var/home/core/zuul-output/                     (directory, mode 0755, owner core:core)
var/home/core/zuul-output/logs/                (directory, mode 0755, owner core:core)
var/home/core/zuul-output/logs/kubelet.log.gz  (file, mode 0644, owner core:core; gzip-compressed kubelet log — binary contents not recoverable as text)
.„X*?ʱЍFru_oo0pӉ]L`',> */$6./gM!N"ž9.=+fQ+G2AՐ^m&ʏ>RϑٽOi%GW Wywq!ی~+R"' Q0e2>7d-ۀPMUf&xܔ@ )R˭ޗg vS|u!xTH0R@`ebTJ BaFD5^9Z %(-;}r&6p15 e_J>.q sl4!?5,"hi\&|gRl}4mm ΃Mb߯9d^CzM ^kVJ_$@u"zLJ1. Ofƹ,2k)p ,0gC0mȼQoQ΁,磭 G3jp/[)yoޙ5@[o_OXԷ޷nL*4$a"%C"RXOBMUv-_P3ۇ"[Nu\"Bg,7v4%2\J1H&BcR&3љDNmj\w>~jR٬xcrEj  SEȖYu A;:[mRſU"42EvT?+V>90H2 .5b0ZYm8Y4ݎťlƟdx)d݅d OFM),Z~}oIStcVcc{k4=5ٌsޒWj*hkr9峍Y0R!L.S9K_n1-9(F"@MJ 6LD#y.b+N,Y.Id bbn9崖ӎiÎ{*N6"ꠄoPËƿ3<⻗GANhBIXy94'QN9U2Mvy3g%4Y-Uo=jܔA9O=DߏgJfz<b brfDP'"]e^ZB*ޚ ^^*>e4:sJ2r#|JIXQ;p/GAY%[J.mUxy3EJXVwxi> m{‡w]i;6$%C_K_ =&غiMwwtfݫt- c}S)Kz^jFww|䐿žt}ɴ Nf#m4ӟ7=4=n YmsJ|,'V??Pts7pmE$<#g@&(FdL,SC38~r%n>COg;3O|M.{"% )plr:ޅ(T1v5ԏx~+L-Ѻu1_*ƺYwVcVB3A*-=Zy)?B _VR&jKd=ьrSRoRKӯOMhhP " #[yg#gi<%v\nD SUYA+|7goD ~p;#fTpEݹ )lQiGt$3M-LiVi >` <DQXׇ7ϊfK:f0oAcge"'$יq<jTF=B8W?V΁D}=ufx>fd{;ZCHݴ:<%s 5@3(( xYJoEq&lC*P1cL;OBzo)-pgp@ L\m89x#|\{Z]u/v,ϤY:4x/,l0;?*L^qx)*q9bxzǿ eah4P_=Hop{[m%dQMbbs}z|>gGS_7]gOv^QNvïF-ٞ3vol^*,D˫p $Zi"h//ýeu\oOPqCcp[ے#؟dJN)-gn̖1[<ci-n33V'f'PhyucB҇^5ԋڰpyAVmkȸӳFϻt$)fo%RF8^iB8+j$pEG OZ "MpRٔ]%cpvZۙ|}&=#V^iu=fw+#6嬷w=?~YZfț]rd_!.`td-i*1)dE[rA×h/488fȵf ږ3 8\ o01GPdF MvQ TD:-.D[o :Iyh= DZ9V*FpXr{? oFSKmM05 ";Y`CT,RF;Md)-:Jp3RCBy(ӁP d F" MEC`ezC.YX?1N[9U4欋ѸZ_|TIW*m7WڳoMZjV4飮,C }w8ʃt#5'A~qy ֯>=? Oi'vekS&V:r1Ebcx1Sr-հYmP4\~5̬)ڨUD/dgaBu}fY2G~oOZXԍ0p6q8_ggmY-5",>0Ъïն\^%-vD}s\{Y٫GOH-ɕ89wLRICޜ'}8ahf{+(K!Ɉ"TD~Cp.B^/3RJ MZep33Ql͍_*Eۢ}Lggx2WJhF3ßG41N% bAz&ȩbۯv>4gK;Il-vM}dѼcQix_H+<>ySE%;[R=>lc{̬튭3Tcl'9E~7BVڅ S#Fb)J6k6M#- 4G6M>_I K#wU]JZy/2{x' $t)%h+PijB܉\`;'*ۭ z3sGyv̵be͡w8STf?N6fYx17lm5&8mz7Ύ_,|uTDcPY2bũbJEEB D#MeBDau @4?bFX JRL=+`˙kHDv]&l}srx7Gzxiq@2"*6! 4.{F&JV 6j!dWsŹAzN9NQjF뤙 t>Z9[m 9Y;%t6g@*L$q>%bI%r5$ew%Zp$bt1Ӂ%Gd:Vo%I E1u2HJ TJY)DEjD bB ~" ~ά?c@"֋< ;hCY0r8E.6uE1&"~( {,*%W{t>o=9 ZJ;>>&mѸt$E.&Dtl346%-XEߣ8A8ngԘDDk 4]oT; xyuz}7{ KLQ)^g#LQD|6{9무] g;u-9‘TMo05SX6+LH׬>؃]nV?I{M-_VߩT\RsY>\T,=zr IHٗ&VG) ῖG\ |5EO7k}7c~x 8^~^' .#`?f%tJJ*uh!YN c{CQə:߅ XQ{9_c]di$ؚYK3S&VhyGĔɂA'cv Xc_/-CN@H)'@t<"YNG'UB֘{N9}rګ=0ŖJ<)̽wBl!OqVMZDž6. 
߆4ifyC/ͷk'{>aΌ>z_?wɴifLF*@3 gڞBiPK~wJżgi3ooq3./? S}AKN=_4ʹ?CW_*557?\^>)қ%N"]Lp| ҈ hA P*-Q‹H%Nd.o%¼%4RKńBJc.^P$풔AѸ,$QֵJ2 xe[[G8-.J`-.z)\H^Qe1P(Q >W%t%8 :?g-C}ݭe jՏ1-bl~=#O,6@&Մb|N !˒3YklɌ1i)kw!`13Vjx9_i9nK3lI<8BPwP@dRh2w<֫fJ*g'3za%N!le.E(:ptiH}lg7)+\_xt;1Uňߓzf~sΖm9tW7ro*?jI x kzalGu8\pJJӤ Q n 2#{S]'ؾBGek1A#R)뱖P*)s&+Ee(r`j Gve#(ɾx}@-=S#q9p>! j!}LH"Iuqk VpJ) aTo&cu{xyG*>XmQVD">)c@'EY YRͣo($^-xyT,;w|˺<}VC_LKϔw^gʩB*dk$4XSRE&M!e`rBk?.]D͖`C8L>\A\V / TsBk{:p)MT"2KC*UӇfG)cxQj:&% oL5}ňRIFvG*i~ hxYa Ob]mo#+6_ `%,2dx~[-YՒ,S/,vVSl6Ţ`*QfsE&ΨC g bsk; 3lDz,6n);γeGsT@G~R?6w*e8F`* `P FY!f>_jRaQLfw7@o17*v|9gGswV[Ϡ/^L^$ɷgѮƦ vy^"`VtY7ee \oq< 06ݵ-vikQ.49 .O˚w܀05ϺCh 's`+Y~iZDz_UdSkPo2is.,dmK踌.BS,rB;zՓͿJ҇gtz1-LCh*/[.*']ִ<ᑇԔ_joq²3locJŤ9R ǁ{ڳ7NYb J}ʾV74ͳ}YcQnx-8)<~*m؟rF-x:$ QeuqJ1)$ E6Jg |=ߙߪ6U:l(E6ڙ*y~\Wj)TU|NAtꢓ]"BpYS(Qd= .p:Eq@J*ҚpvkƂ,dI;qe!xgO1< FӟUy-WB OG3βJB)f6 161a*AL)Ǭ(>遟G@XJ;$ֹ:h:) S}r LYkNƤ'&Uhi*ĂЮX; !{3*xԚDf0 ,)"$핗1 S{'qk%̕Լ9܇VA~igM U0GYmrEzβAq@ ؤ)za!R ܁dN=W$t%@4Ǜ+.(-ax_ b40%HL>WpاٸWѠOx>yP 0ɒ$JBbL"RA+}h{WN8%N9HRq^S)fR |>sD W)jBN$E7H>V;| "qR$9&7L纠&G@ 3:9Oh.@VrR >;Ɛy!ys JSKGbDƕLR"IPR9'.cyp΁?zoZF7pRZ>x)n8}.iֱQn,+ ^ mk%U:ߩ5] ] νCblg޼h -u1Cf#l; @ u4q`xk"&QJOuy߼wm]Œ#zӢ2nrq{˙Ÿ>r(mf.ohr^^~;e-]7m~n9:},poJ3.voK3/0r`pKZoB6ښjێii)JaGcZ '?e V)vvk`( PĊÑb_yYEzvKo]kOv,(j5BVy%-1p8LM\!0Tj-$.\J8u1$I[UI$%c 15ϴq'Zպ<0Y>xs[(MGňN05l;s&ÙgJ_[&d"US Adc>$J )SֱX'fAOVA H'HFK<4(wx!- uPcEx<,m$m!-`ļeuѡ~aZ5OJ·t&J8vLHh6~%f\J% d<(Alj)*0!}L3;W 巳">zۻ) lŔvLJ1\ozR9՝ή Qs*E S2tT77uDPR|Aє"H\V.M>9B(HISS"q? 4$@ahΤ3%Ñ"TuPzխpvǛP'NFUo8q3] Ь=8aM2y^tWO&?^uL[s;s9aˆ"I 0t% k- "A;k"mO#nö%dǮN*fžt6ο9Z%wp6AѰwwxL`&g2i͘*q:noY?',m_^v٧WtCUMe)?=rH+T}[9C VAo_@\"k{$ߞFE~MAK󭬣fEups9Z[6}mNK@(yw7vZb{7M"@ˈj49 .O˚w܀05ϺCh 'ޟ,4L-"*av)57Ҵw Y:.ci PMG5:-wIZn-iM_m5['UpyHMgtz1]@|.qoQcNet͆}ls']ִ<ᑇ~-Դ-0U 1gLbbu<Ӗ-qsú7NYb J}ʾV74ͳ}Ywm$ Wx~?{w1]pYG5ErIʶ6wO%%j(Qx53GU]PԤ8y[2`}|yPޠ7}{A-8p$vQeCn3MPI! 
(MntP޺UB+iȺX;C]WU~fEt꤃oi 9nuj4JUWKM ta0]A ˀ>F >Gb~e>/{Iei0S.@)xo.tikv; B/"_kqX/ȉQz>pts~n=j3*^ ǧ;b zAyoIkDy%,wVz=S:ڧ>~4Wr:~aBR%#.q 5x["Qc( WKmA*ddd8~4XGьKn ^F51]mug7R\;PS:F'zxMUOc(i`mW&UN3ɝDUKGd2# VߪTޝNv^YJ(B9 69n,2XʂNVUɨN1&7 E1YϢhRNCT;D.j[2֝ݒEҪ)v0u^ fp6 f'~tk7~RI`:"#{Y\PF:BMLXVmcn(Ɂ GaI*XK*P"܆A#3E A1`&(Y 1HvεawL~u<5QSjKS$)ug(9PTt#_:#JZq'qM`ۻqWdɋyeY|}QI@2'0&+%)jTTP]IFEs]aIKп[u= W10i-y BEI*H єjp 'ѹcEPH0=B=Ueֽ?eU{J, P.Ap',Y'4m&#! tX[>.Gr HYkNBRIj4ށXTA3WeCBl H5`C:a L -1iԎٰ>tIw/YX9+΋-Uf_&ٜL^rһdc|Ifcf4l6]l~_,54UՅT꭯ 7gӫQm)rM B1u3g!̞!X# ?/J$yׇcU"#ڐy׏qilY6Kïطb ƀX CYLF.?_4O_?sb\?/s/|F+SE~1 ..自ea:@+wٸvz%Į Nć$훓ܢ8l|儝٥Fڦ<|l2M l~E )XO] |șce..Ddž>f)iET}/߆."3gzjeF$Sn<<ci`ܞ  Aޒy_ f#w0qEʍ`~FQ~HO~ ls *o}ske~W{ e.S %3hՌm2T9}BJhVжJsV%/bsmHq+ J HW5+?elb4tU)+ԠD8L"JGk\2Cq!a wb ,W$0Zm,ZO*]Zk4q%=Դw)C'_qsm5eFs*&BEq*6i|, -q *h+EօY~4JVT}y ("RpE1h#U6i>ա!6 @[@497uQ9mQ )ITN'F(hD:D"eZw8٨WanNy^Q(\`%s$ R[)m0F[r9/v"B str:,QJ{\-5B*H4H. HRxS ۦ 'BIy 9%Ixx0s.蒌T;oe v|u/y 9b1G&)P(GQVBc'ccp׎oƿ5hkn\o ́Od4Nj.8Ž0kHi0-EڰrZ gUw*sb`B.m3K>C: Qkgr5=Ci GcBM܃ ?omTDɨh:s{]\q9R9vۋ'(3/y[Wr8+o,񻷷MK_/σg0dw0#9餫 G;+?%VێKm6oze9a i!N}&G_$3(B -/gu DIYKodN+¼Xֳ\ 9`-%Ik pZ)C*mSR$*ɭ5av]`Vކ7A޸'|8IoV{lf_?xL2cA=%-gC8W G_̸jF,}  ZADI;B^\BfXHIAChfʄ|1!޳1*0 H:)ʬg {PQB+ͅK'ՁNZ)DOݨ ݧd]#ϻࣃmBL'dg ȥV~ 3!OIȓu)lU]U#4H֙ UdOΊݍg)ч-/=fwcއ*/žM;lwQo 0lG d^,>S>DLG~4,*ͭͷxPDjNZC{S{+G} &ʤ%-EV$M=#`wJq*ԔfHJA}i_IQ9>e*ECRlJ3.> 'd|]YDSJzk٭7rty`vyK㹕|lvXG m=6L6`SW£41^}J!jeʥP[c?(LۥVc2~DABOo_~4";G)ꂈ3!X*_>|ظlQ^zש7~mN rhBϊP}M.8ٜ\R %GA㪳?Vx>d0P9 ݁Y2c~'*6ϙlEΥe>|F[5] l\,7^YɓsK/UM?\Ա͝1݋& {v3&o߿kR}_7J3:~0ޝAx0h<|\4_[d/~xFveb Nb #` s,N>hnXC8VysJZeۭQXy5ale9W;3XkyE?i7 7 ?PA5bpө #f; _2lHA^UyU'h͏6O ]_p5<Òwճ?=ú+#~xPoV;{h՟SUddZkfpw%$LJeoQדͼG[j7וu ]WpsY*Qχghj< Y=nogzDpl׀E=Q tgDy9Em,^Ȧֿ\UZuViե.jliѥ M׮j|" mZ{7hogJ]d鳴Ղ{N њP1ю؉Mt7ӧ$P,nmYiR o:CIMMԄ1bL"4=vX Buؕr mqmsQ Eg)b/9%|^{["ZѨr,G@똕hv:$+QѤ*CdνӡB`o]4t4FթNl~%B!}tB[Z](y iJ?T[T*V{:;IQ1'B#\b|jŻ IH/ZƩJj[+s!$+z[t΁z9)! G@1-0&(mGZdjڤТ-)$qu$~JaE!U>1qNOI=j^,>YІF.\>E=uk,R`䦤gE¥:V@FmY)dWH! 
Q S( ٥fّ#DȗBi`L9&X3rV{UPQAm> h-͡]K68U& G %jɭZhx L}h$kIu 5<o 7,:?jPJ4%wV4kkuT$KA[&n[`3 .-zL}ӿk'QEEmCYk Q^$zhhPAI; -662c kX^:(Jj]d*2(eFAjЁF-HOOdqewbXoܬGŰ%6vł*'+ IISL1. , wHN_* Q 2=Pʢ#6:XLg=:g9(W (]IڈJ5(Z54)hc#7 3 E5jփ*M3| R5zfҼLFALPha3BBvZp:-ǝ v)DlajlJ!@6(-< V@P8U8P(-lZj䋞P+Bb4CS3A8QZ{*(=uGe(64qEOU׈ ` Qc6n\YNH1ˡ옅jI2|I4L](I WKh$dip/t:R*?!t+<!GzTQb-n盵y>?ۤCiHN5eCzS­bl[GM~lbѓ(hZ=WUhXͦ-dx, }@oIQqn;@[5v 'uv ; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@_'~rRK}8N U`@@w3^b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v\'P@$ 6t&9h}N @/ 1@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N?f H7"jܝ@rN c'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v8n.7VǣWfԔzm/XP wi.V|v'C2.)o>0%u6.ҵ7*F{H'M!, VϝQLW/TDWl:"z}]TҕN{}@tƚ+xe0]@: wHsW,W":2]@2BX}cƞ^K}ï__^'ɼ?jcMedrxq/-LM'&w=Aĸ>;y$/ӳGeG3q͏8co/<}ݽvE:r)r5n9,[^+ 7?Mgxn^LKߵ_~2cq%}m2??K0ڗmvtρ+@cUY>:m4ݿfvo>~Ϳ̶D6]X_dm*..kհ]ڇWK]ꅡ{{7.?ySOpD/|>/s:}f ՙ ei;<[uN)tVaN/ۥ._߾8|1H}5/-bٟhqL&Uw&DCiiTqluBcK;8Ī&T;nBIINYMDǟMde_C${Ph&;э:N+%Pu!6FЦk/(ڽ{c*'m|~yb8QqvȈlyME*bk"t' DvsALhHo~;tZhBVtgwdDFp,sIǪL١~?綖^*z客i |>Y:3v[A{t *hsSP As Ќfɯ_ѵ@L0{1L@{ΡR/Mn^qsg5I7&ƛ^=G\ ȓk/fkkk)?ԕ<ˣgӳw,TKAЈVm?9'=ȠJr׊PJ:Q 1P Ct,V(R^MN(>b=&qrtVu#CLx/ݛl O=OBmw1y\o ^[Di}J%?*+sk?v9Ĩ8ƙ#2hpng*Ipx Z5.Z\Up솴z{IE9;^ 7t|R"Mkp)c4rq)L*&&FGm!GKhy5rL Bicd P2M+| woQܫM}<,>OOHgi)s2f+90** *W!8iyyQ蜥tWFB8PgQ恬U LĴ< " 'Bk4FH!).;%${< [Wn}|2;a]ֽ?e(=T9.|2 2"%d% MAбqؿz_4#HJzRԜtŸxTE4Cbu]AzI c [zq WU.;4r7ihIhr5Gփ ֯HXmzrwp!ᗆ~)Z"ڀeu" a E=~ȱۘnXN=afC,<5j 2tn\Wʳxw]B/wėo-2l)55I}DAKBӰiHrpͣ~U?}wQe}derޏ{5ڙl+)PCYdbS)[A2?/}Fgkdd4YbBUD) E&F'Ur{mhlH->_N i?G h(2+3/2,3K!Y`Ejٹ),#hl!TXN53 7G@̡4XHE|d4sD5NZ:;!Ptv֠yp}0 )?cUߒ`;&Gns qđ d-ĈҐ4k}t@ͦtD|ъ("Zo 1)KK=Ṫy@\-E2\Hs#8V}m옍`JKg$ C,:qQ8!Ԕ0фin949e9󄷐ge|1Çb $T`d:5ˊ\}࿜Oc̴"*Sʃ53DUi2)Z.5l^*~PW-Y(EI/%RKy?V 駌uݣyshZZ!:2TEp[]RẠZy]vG?ogt1f7XPy@f,}~*e[Sޑ`֡^Uxd4lT576j `)"["(qoSɕzъ*ҞDTRZXBiL( =˓F&C %0+ɮvn odICP|M? 
Z9X^_;(U]HRD~#.b bGկ}i#LxM[k&=ua®x/O^b6{'&"Kr/d^TgekquusܘS7VxkSHVҪmP'[WˬLud࿐ TMpte4>=.1|̎}5 q$ROmy8xM;=f^&fn?>>TS&Q)M(%e)s0>&yԖ VJfXŜ`D5ejC}u3 ~CtPn6n{à7LK6q֮ڡҸ;ji>^S]M'7')7E*(t4Bd2alԚ3ęxH1pXrS |' .oB뀇m/#vmߛlyݓRf>ye;\#`vy<\; W# V',[ayy0m`}Wf]D=ˤ ~7llڶL7N4)j;%-4VG.m%FW:0M VB.>0 Xޑ۷GR4M߅k/J?h*ڻگ\*)S*ZeIgbNͷH޲()CvAJ-w$h,}҂O[5HP!UL9o|5rHYQHT_[zSدYR#vcZ,kIt^ݝslP?wqqБQ.1 HӂOJ%"r7E=,1"8j)H0!|LLw $Ly)b]Ө:߶-<9P;! aϾ-aZ,%8ՔvH`j^%/6zd)8lb2D$lji)Y;(śoJ:*@hej5J!PrlQ "U 5_:q2>/30 +:k0߼z Al7h:-Q s%aˆICd^G:HQhPPS!vۊ67OSa7l/y|7 vnjP쒧v͑~5ڈF"W7ńmQH}~zErf\7E2o2T2Ѿ%5"zx"3:&HQtvw?}LC&MG*Tzy/r!ATwS`RG'y%Rb&uPrzϐE3GMBQ PK&}gUM׽^_YG%+m\c2HU)9\p-UʪHETS? !TL{µ@n3ջS[֋Qyi4svo]Clñ{q_w{B|dR[p*1͆D^d~fD8zVgEt]qFŸRAI6 °3bH [bZKO!v[b\k. HԅOk 8Wsd=f`k sב;gUN 10)_OvE_$t@LDx-.d_]{oIr*` es!d 2cr(ۺ|T 7") 13UuUcʠMoZ%)nq]5Pv>Wl&TM@jaS/f~[sE )N`0^qR+RF >)hDcX]|֌m=e0K=J|Q0;?L+fqo=uh4-v0EF6[ëO^5cfnQf}jӎ6VՊ,LblM٣3ͺTUOtlGbapظ{ Nb{lPaݦ.i0fpi!yb%ћ-uK6^].Fto]f^m^RyY_X*Zzz \ 6C׷Bԣsn 6v~we!w26Mtf|S(迪t[y" =ʇKy+?c9m&qRYﰜ _|݆!ZJ&yV>CfYc,EfvؓW,X4\v1D9ܢITJ-7hP ]r5—UguCJ.]eĔ?zϟӟPxV sT{Vk) [Uu2hE`iG.`k#b̂75TY!ШxH%h.gˋb}.gr v3:R3\>s9\>s9\79r}.gr}.gr}.ggroR)yQ*K)Kĥd)%jg@[RR~ҲsGhug{n/R>`B ܶBR&)덺ٹIC`r4nP FwwiX0+1{v1}]4~Wo3}|q7˃|!G!ͫkPQ](v7Pt>]ZI<=ޣ7ܯmd@`UOo[ZA-LU˗ g\"Y0`N0,,+Ăⅅ1e/?]VWNZJ7Ӓ5lƒiw ն dyyp(Gsf-wNZn52m֜qII%4|$50iJ_30%p#,-55IԯcrCO9^Og 68LfT{{xrZvPgc_A}+<ŰCts,"<:g&3c>~Lm `\¤`!A=HaN(-@[ ~ыgG8LabBU rAHK s šNAJ4`GsNבaY ^IR.k%'Rk9Aۢ:,w:+ {>#kpD/$]~!O>ekSY1lfĽ}P,7?,^.l1vL#;ߴjY+o/crC AHRVa S띱Vcю{-#chnDtCv/zߦ{m&8ga#Vk 2ڨDDBU6;"8DQBRE#<+)ȐQE.(qCvYI(JhR".)9E4M*$-I:np.; ɾ; LvCUg*i`Q{*8 ZƘjף=/rT+o>=14 Vh0Jmb0:hs5 2j4jFd>Z7()&zSNq飶ěU0DPR]k٬ף8cO]I <'f 7 %]lP_?P`m/\cYL2z&A,h* 1K f ю*IvE`g?F^:e%Qw,0~ªh͸v+^YoS"iyҢw60! AI5^p,\u;5 /m (H/o:\(Lq3 u{? 
#)TD2'Tfi%1  Ay'7罜qcPwa@zºG%BSPf 2.9X(IA[0AaD.iؿ|/4m#H[f̹RC0cԌ p~#k+5ANXW Y}ܵCD)-PAJCJ&x:Eo,X&30S[>V˸vw$lb BL庙k>?ZyFfȏQK]\U^챪,HThޢ|"|9yV.L{oU$)6^GkPR?ظ[ikB$ru?Uެ+f/n[j6X7*-`K0Bz`P>,HoEv{!L; 6eu9Eu袹7 N[lk O+UXN.Aq d}lXca8yd<?M籩jjs:&jػ}פֿQ}@SJKσ}Ws6ܮԷ-@_XX{ŧa4He^ g$6{rp}ZYr3_y\$niSjYޯ?֙ϡx۫[K]Bȓ)[Nq=JOj=e`c9$XBX&JE@ 4C6E%><'3h<`''XHLaHXO"hƄ5 V^tfO:33nO|jꭼG=rfketBS!U&ԑ*}E͊Y( B)\0зZ Il 9(S& H Ɓh{'(H# `9bLqbxϡ!Ie:R/`{ER`9N7DpǠ3rn̫hwgy)޸Srό#DY\jεS /;ja0ˑ4̇l udI9I!Ί T,d *b"g) 0`R . åh\IYb}#ED ci^ʽ\ ޺>8`fe9 $Wc(&)p j 6p]߽&]xLꈊZ0Ew,R1$#|(%1=Rns2SuV^=JgЯK8-8`kS;ĬW逰 ߜ rT.tm'O7)~+oC/,҉KK7XRܸ~:%-]\WM}=7 ?G߆+68ŒֳnaFX/ IWpO3&beWqCUU0yw>P/agvɑ~T@a4}ե.r 7JE˔ٍ3)* ODȽ $N\8˫DWy~"4G΃6JWIճvz ]̑ه +p=]Ӈhpuچ{>wa"?"ۆ?u0{kr_xQEn&Wӹ/ۯ]<:xgv|3z2vy2@W򬅫y~ӹQ}˫zR(T4.ypqJ&2n;ٻq;ؾͦu,b8:/aǠA>G8@Sâ$Zz>zQ%IBYE~KnXqHv.Vh&]LWWα+#Ob)(\ഞ;OG&?8ՈQ8pӖW/9BSZ`˾KmcF)ojؐIG@Bph Ao^r)tQxjGE5,^uR[g3nu ouBm:cnmBν~ӟP Sx+[7غ}ܺ^=tCwm67v-l}I7woRa2oqw3o3緪]@=7tl2ON8(]z5wSs]<-i9>NUH9pEo՜c}&v~mswJ16Q$%hC$(A(_8]:oNN#''}'Y;Az!s Scц5S)ʘ PZuM`ka@7hdָ-ysi$Kblv;#]L.籏K(W`5d_qNj\ 3r;4L0L5aCs1 ,XnͥqY ~ب H8Nި G*c"FĘʉ\g1@]Q 4 Kmdhښ"uI{)ZS+EJ\L,zziyd)D~E,Yd뭌^-EI㯭F(t:$GZp4Y D~;#{G^l_L]Mƣf4m!Tg3} +`Z(/r/G{sww2 )ǻAwg]ve,hVaw ]ֻןMf5'Ov~~<V~ d˪TP>b" p u@ %*G3HS롄*cϜoqEen3؜bD9K@kFwv嚠f>U1bO1yUCW*Ł~cēpaZ1]J,;y)$ԋҰy6 H[V6.tix˥W\ܣ-jv7)8CﺌYEB3Yi"t$[ι']L,Pp"uYA,W VCE{ )ocfLӤ˸Ǒ2?颋ϝPiiB@'}튾[ixy~نZ7PN EMIww tQE% |Rk2*J=u3cF\J\ d,Jߛ98|Bo>n_I_i[t 9y.5Fؗ6ъtEcv ¦۪^y sz]ǡ3J9Y`ObWSUZj"@. Ў7B\Ja`ʊyjs"mhDTB zMڥa;5?٫RήzyЙOx.YsON Ӑc-q&BߥO6OU݌Y˴ErAb"&?ygLD@e.@U`vbl|d-ww6jխOꛯa/>w"'ۗr9b7&}1qqܑ92{CGah/pq$\fxG|Kιlr=*LPkDh8#1Jk  p+*Hzs)^VtxSKh B6]"eZVHu$H% cɤ/٫ɏ?] 
Ip6DG挧hDP7w)c \Y6SSp"JG :xw,!Mc3ΒWzCzɸgwg*XZ\G7r8` OPlB Frߵ1ؼg>tտ9@VܿyOtI"p@Jlh״A*"ip.5'LM ߩ)A95KB%V;yk4%sLRu.dG7>ROl٢_$,O5'AZe2JRQk"Ph.-l @Kl/%fG7S\ho2 BydIC[*oJ^dn.+"|m#'G/ " $#a֌@-LZnDY'73d^ 6!vvkJ|*DV!qo'㭕_9tB']d{( RYnؠB`4JqIC&ƄHI8'+xZ)J4#87[NuPb#}.y` !bHH/ ?S(<@R9TJΩ,}9R**jΩwsSMͯn?_.2V% s>_Ї;EC[k5>\ $GKaZ ïzrB^gb_[O3udztw {_nFx|Oc0QbYH 1džb,H:uu߼Ԧ^.Qj\ٛQg^.]_</<8?G;}Z=9OxG^*F 4NScp8]9Cy{N$e)P5r Y_;']ۡv"f}Zq{R 9Klb.i ldigxGFO@K3il HNb"E 4O}FLkN)E/8U4^N}Nj[m_'V0=>p5ܶ)Vh 99kݮvس]n+VF.POeUL&,JLt8MD,ӆDzϨJFEBQ(J!$'zP!ƠSV ) D6*mEZr[Xlfh V[wg7sE6fvAcGp:{;js&)Be͓r1 fP[ XcLfAya @s M&h:xP`JmpDҥl*,9}fǮVڬZ`K$S!'QtgQx |B5Fem=''%UuC3"GG( ΨIRQ52JXޣ\1(I`nY#j}^Ftr>ɿ/~ ߴ%*ǭ8Cg%WH|NMJz-Ff'qƤV$i-cb`4$Dz/"j-_{W0X^ i6tw $*ǡєFUskber7F:dU-b|שw(X+1m1!rN{8C܍@]M Yv\[)KuA |q"3Ye*8m݋=g !/UO=Zsf}VlM~FyJ'z|)6:=/PI:"yRkFj1/]ђW`d|ҤHrl+QH26x(SP P2uJNksq&z 0*l~ea`YCj'ՄLf2<W)V*MY<{~rcR uwP]-oiyht^]*iu!˷g`~X8` ^/'vf⼵$)JN ^4k{mG΋epg-ppsW MWmH4$݃;EZU3H=e>~mT7,CpF+SG</;:muwL8]8r 2&Pʅ ԌYQ GZ(R֦M#- @6M>{ ] Ce^Ff'q ^@ $ uoi<鷋}q^&t^ۭe#co~n&,pWstvc3wԞcN .Һ4!zhYMbސSBϩ^)vw̺O9 ]L*z! * 4e)ds&M)@6OXbM]R v֊!&,wPMg 7o^ o^o u(}:x/ty~81# qNOC(JZ`T&Y*MVxY= 󱋢GV쥊|m3(.ksPRW1(̾1 c1$ksqr!)~Vk)w+SXTvAiEB5?¢5бpq"6|qOj^eEr}>NaϩQnbanCfmn)+d)jF6b-x ͬutSTֻTM4XIS MI, !HJGO`M2ۨsTE옃;g>@84-8&o+WxH R&$ZUKKE"YOIFbK.T{͖F1g'LEꋯVK_1ms(=6}+T3 RJA1y<#o3X"xZU;Y7We/^D"TZq/z#Y4T_x$FAA!4JP!Rm|B@N;Ke։YEϭnEgZŮ{} O[[+<̽6?mxǹ7+y0P;&xf")f%Y*$ 0*%. <麗<<ܣ%cw V{y~/ &zlL|y om +ٛD2ށNr^$IIᕨUgYTbFDÈ-;[q }n%k;%J|>"A U/^P:.`T6[#-J0gE[rA<ҝnNhLly2{f}oe%_%)QDI^#D e 9 Q;A.ؓnr=ƞ_hz #YtDl֌b^%˯`I,٢>Wemx(SP 2)#QX6:l2.BFP0fP6gݲ?0NY9<N9k8T[ }LhTd\]7JY7^^_Lg)CMp˿K:h}O8΃t&])[j!˷g`~X8` ^/'vf⼵)JNhJ ӷ(ό[jn6ίTXvRVkmT""y:ukb*h. 
gV;sHYb[;IS &Z|~9eQiTs76[R쎼+;\>orH^TjEwG{C| ~S8||ޮ&C?< \2 ڞS@Plh-dȘ@)r4[,(#-hlf ) xG>[=r{mhl}'5h!$AZx˼O@AH T)%(+k5!܉y\@:-]@v[zy*T2TFs*vTa]e*nlͼ wfM!bn2zhMT$, hd,:~T;/Y;*Y "1@Vh輱CqBL:*t@(TDt[AV([ 1a\Yg y9F#&2]`j8zuq>^^nH&`ahczψפ"d5`QǘpQg\q~ =@k-7)ϬꜭĨ] J*l6A*L$q>%bI%H\ ]Vsȱ,]Lt`G{QnW\yYĀ[J3NIW)BkQ*BF)F5C=]h>t%G֠p17XS}Ŧa<qp2fJ;iGЅ%P%](W6|ƒv}Y5GmrUzz%u61DY=K׈0A2E(t>@BFYT"_a ![ʨxIe|ڈyQ*qi]_Ni\)~m[hP]ASܮW|"T)`0W wߴuu|~5pYǾ~{zUNMs?X"z}48~ u mZsжPka#!`CZ %ٻHnW}v[ɢ#  b$〗V^HgFZ{bO4j.8VaWs_r|zqY9lլbeg$V v.!N-[cA1υڷ)d]$G(G6Z:hK53qLM]1RimB>PŘ󾀴~IF1Lgi]9W;־P+ԶJf@i#׳Mp>>V:۵:󿺐o8^]NfؤM]:3t6F-؈M8oN|X=FVEt~qsvlD?v﷗k5ERǜB^TE]dJ ^ouQ'{ Wή]&~߯K9:沱PRHܪ # BѱDlQ:iާw,%B̩OS3@8j6,'25nd-SPI&r6-hrtN`jl6#gɴ7Ex e#L\JT3>yn\ 5s'ӛ>yE-}i˾)7g:ߊT+oCEqE:Z/T<4zMVIKW#sShѕKoU%Lh\.>(c)j*Ii VR5c3r XR q3uQmu.ycۖLWffsf.f> gg_Ng\czp%dd IDb3ːfiND$P jNکy,y 2T/W'*%˪S>@_xok.>RiEf'yɸ<݌;ucG="%t֔,8$t(*f$FrXIȆ`PTc]`žQ}495D&j-b.Ejr!4և٬{~*CшkD5[(+B c^d@K_:;PcB KV:M NBR1A ö7%m7-a; }5L{U{?;Fpc>vlx1lOdJa4G-h2I|7@ו!>?0DV70KQ}>޼`t;'sOgWSz_q}뿾Z/n]ǼJaK,y5|͔էEJ{9w{܆>O`? 
:| uW :za(c;*".%w{Qmw^?6~K?Op篽]<`E-0j:͕O_~u(0~5o3/_q1DO1F!݄c/^b*EYYk _"be_Zk+ C>:,^,{<)=qjO.{Ȝ~ mv.B\K<}zw*Z?u~d2T*[]Gw<:^?=/da1뭕Ӹ~5 yd#ѣOnynɇЇ-f^k[]{mx~ܼ{9La[)p˚_ ݶw״gpy1W-6ߓmnX6*SV7?fb]qFvIurT‰8\#ۑ,Zeə-\["OeKV1 %Z0+YHơkkkQYT0EFM>f`rLkK.fl6j_~Xo.^VsQB+Akԓ??ů#LmSzzܝ(-@̈WWQ ŧΘ(V`t> vtjyf АR(%R<+ER$gI@1S&PQ̅ "39U6Bf1eHm ֞=p@*M6.|c#2X4ͺګ6_>~Z㈎) tӋ ggXƮ(Rh; t Q1| AHۛMr>{z ^d:mL{gcӇǺ*9{c-OWl#rR)Gvӯ-NM:Ca ZJFJ( @ CȬi@Xk5TVF!KB)'8$gpTDY l85A1 l4@9iUȐZ5#gužg\9\w:[:pί'=ȸU~3O_UyQm *@BN.`BqhBˀB́u&;w#ަîh=&>)|wfcu6V)"Y]˔d'A F8YӲD%@WCE%䔒aĖt-@1AFN"8/kpp#1`!IxyYflq >WJNS[+.}~ˏX׾%V(}խUwdV}ьJ7;=TgםN~}8m59\\ϫ1ߏmty }9_zOKp?KAO(L'kv}v5Z@| ESpet₾mۿȹzяr($?[{px%<4ۻ,훣"Q^6>b{rR!oSh?hdz5-I#yUq2Bo.@^wSVzD[|2~ۀȶGo{qQM{Z[7gRbjdȉjfo̗]NsΚܾ NOS}d\qoFѼG56yl3֭|yseee.:[=Jg>ގ*CՏy|rĖւ]F6owse?wV@*4$E'jH!D"VfȦicRf]AF Fq4ʼn 1܉M L!4xJ)׹VcTKsKQ-]@,¶*XTq٩͡:UƩBij?N6f,5mmcy$Q@nŎ-*Be^{z[s:xb!bn`VC[@'˥76H a^%)Ĩ5*` R -]'9ALWH/7np"L$4D*x g4K :AT>+xV 3GJ]N8nBb K^IKw6=P4.bJ;Ou>ۤs^".V\!$UQTnչ[_cY1T1&( )#ʑk8GTc#ZcscE~XktG@[s+Ơj<90YI WC|G'sl8nJXL"M>zm٠kS @ZZ:mEcF5b!hSf_.U0iJ=!&ji.*`;c$:'-uڭ|O\:r\=t8ޝmmF߿Vƻ.Ksy;Bvg{\ݻ[ފLCnFinFG˟x XxcVe_pkMmz`mͶ" -MkFZ3%j06j,BB'XA :+%qֱ$$j\NkZbsm^Q=+_zkTU$N C"')O6#34J 2΅H©I@e$_ɈO9Hg*9 Ww (Kk;>v j\16< Vs Ô[3MD.b翢C);Uک_x|j@yw;:PÛbG.+ʿ0F8.jB~YBvRhȑ=Jv9s<-yu1P^OA^h-2x!ou9u3{˸kdzf{lJ~]t~\j;/w~^_^f/MngD^ :4.za3;nu;7 3zĔ>^wƆ sn{2>O]$@c}lk۸q:uU }[7z~7qk>wWF?ܹ86xaQRѸi,)! co4@jo5br7. y >j:9`Q-=>zJ2#r"Q݋n$Eo݊)FcGm6O?4'>@SiŞR{?ZLqF*D!U3a?X᥶1i#IsD @48R"tE'LΓ6_,tw=n xnhz ?rgD%mb  7[GgH)cՉnn&Oo7JʭͶA|'gXovk͍Os<'ð_zm vYZ74l5$3p559.xY|Y~9U??PU@y W\^Ƚy R3 &*m D֘$t#$oW%}?J4xtslJV#'%}+Z+Aj!DL{`ht\gD(*&Yf9jG#u 0G#u` `A. y*#g>/ `.^VC'^B+A,s/ SKT.x0xp@PFLV'儞JFL-UǞ9SkF旓Y>rk}pq$|X\=\I+F:z <{J>A\*z P8!q VdU&YR+(;vqdoQ\I2.`p!vm/SVnZL*;0". 
/tםawX2X{8ߎ.=B%xhA7=_?&VDME;ckE|`Mfkhj*J&7,3Vg,?}q'%wg58 Q5"V"4!ppXy'BT/.՜ML" 5`lrÈ16lʐ ;!M t4Lej 9vMdVM8H +$3y2*ͩL{Fq ˕ 5jѸc$"*%g{q+8W7ɝvV|덋2eZF 4 1EI|`Ngj{H_ HH-w8nEwu,-Dz_!)S2)RH1kT=\Gy/憐Þl6Cv,`R1;,oG}Jƈ(~_ωdXzķu{/+r糳8qѝXv PrУkZ#5%$] SFU}SfR\dJ`NAA#Z.ct>+ rTuQt0:LYr"FP80l8{֚_Do~d>>m9o݋MOnz.}-Yc/Ii ԊڋcaAEg#Aq`wA.M(Eb yPf8$&RhQ$\ahL#Zr:SURi`l8=x#-B߾{5V:?TYS,JQYڻ}C(5sBё sD5"S0qԑTG]}Ẳ}|\dD锯7eD{ϰ|ZSY%.4P 嬂N;TPcVcs(~z,W1=rW|M=߾DЗ xb(ކ/7<ku)0&$Hl7A0#"< "4[^ #00DQX@r3a$@-6ፇRdA5|\;wL<=i;ݖU <ݛnQqy?y#$D߱2j*5dRY+Ad}Q\4R*h#I3 \^>BψƂkVsR\&rTǤ1 Ň`|D]'NQ#AO,~Q[ZQz%{ ݵmvQ>um^Mmc f'FD/7}PEyp O3?SL{߿AQyrxUI\: Tcؔ"ǚt=n): HsePdtRjn{s$BZ >SkKKj5sq &UqFƇYʫ``q Z -Xًyg/mqe>g-7!ti::=6/i_AɱI&6NHde-(1ȤԓGT#TZPF 6:L"TMd]%|nΗeM\ jf#} tVVjZ$:U#`&U:`` SBjjO Ok +r qR.Պkfby^ԯ20 "f"͈#">db!R b|.9AJ1hap"N:,Et<,"F48ck4< ֆ3dj`^s %f2#`٣u:pĒ Kw7 NpQ#Θ̛X< BY1u2{jʭ9 _gH|rPgt.Ν lZ?"chuTWG@ɶآ%Ԯ7$r5[:~<{jDv v;nSJ|L25Dc)#8#ӶyůG1coPb| u/. B٣rÍA\0$3Ɩ%db+uvsAԴ#9M"yy1R|5Oh!:irbǥbqQytaO!DfudRm'Fg1ַ֔D hK:ng[``v:T\ZXE_g-u~1ka?dO/Z[sMr|;rQ &tu{Jpq&0&L~图sjllZ$L/Nگ>x7?`no2/ >nl4$rav|Pg_ΧeBD?jA|r/,=:D̮~jo/? 
\ٲ"(7= ̧&⛳vF|) ?\^l4}HӛLr?4vV8AJ1C"k+ho.*ʍݾ?_"Nzofx*EDmu!#Ogw4 vc:~0>އbˤ>]sk0߂sk;^bNWbbb ";k/bOVQhݯՔNwyZh'z'Nqd'h{Ķ=$"I]<3XPF},:@aP59" ~ϡ8V#s{i b6]> Z!‘u"OC,,e&\0(pԵ֨"Ԣji>0|T f{6sy7vihήw-br*^'5("؂J .iV]U i-U؆8{6{," DTJ&Qc)+!\9H t"JHU[)b*.(2L T@0ǙU0 I LKyuu9=#]t<(m1bEs5 Gk# 9]2)e|2G4\*d9GU tBYhJqU F#Bp\ɻOm`CUAX%C&5QIe}=THe.*`5eL)G;@%Gy_kr{1gXԶNVյΤ:ߵkuR-qK νCb]Cѐn1C[ؽ!'QzE(Ug'1Bх J|al[kBOZj}F; sH' Bj ]v66U+RIŗVzo,A]ZD:&.Z}1V\AB9{q֡[6݁(tLLVzK]6}y鏚#J;jIp>D}V:mkQco)J6ZtޢRE;l[~vΆܗD=!>Y6v1ltƚ`ً/)qqBm[lxF )SmI"QŒJIXt) *(2T Ѱ+ -ھ# )h,9_~a^yt4]x^T+)(^GycWXX3ƐW!HR,?{WF>1V pwb dtte$WlIH-m9muzT^ESW`g]tLBUH~G#ZK?Exʎ _go,`R:RWTx|^O/3 @ZfH9LKc@JyRuM% ǪHDӚ^V[:t61TR 8ft@Öo ,bxو2S!6 )5h AMgI0ʣgN@i@yO2y?w'h/`/ -!SjQ:gL>hclB,rqw]GoS$`R>[,ߎ+7훳eXkķ};X?1 `8].I;0wɕUp -G yjpUUU UzbB"ΙAS̚tLoLE@VUsCr.WS$h0iĭFBd5P%V}[M0,rc4 )r'FW#p)+zv77q ۞}!97Kp8}A jEhaAY'#[HQ]2Q?:#ͣzD5 a5 4W !B,D*rE**1RdѲQCj:5x+|YwU>f._3vb*z]1ZdY[_ݍ[Gؑ=QQtZL]p:#QLc:OkGcP|+&_g 5:Q[.8 p&cKk.]b:SB Ȁɚc簛EȮlXSoλ߂4-M (SJuJJAx&,h* Y㭳N&C4!c!õaYZG`!G5iK`UʖUt2Pj)`x!1DcyiD #ΣeWe!L?WK>Zrì#=vHv Lɛ 4 \I4'JZ4 7YG볂RFZEqA# <ˏy3"Ϗ<)bQ-0@Gm9ȉ b:r4Gy]9Nt @{f^0]N/yӍ&>iM׶h}~=g09'?Ly#ؿJLh,^s X_Ѳ3,AG$_#<_L]Ūܿ<$i`tCVh,V9+SR Cd5p䉃őko_s ?<Ft>]r`ܧ7QU*GmCpRbB$+1Z< r w# zMv|DYo[[նڊ{C> (w8x瀼 gx1Zb=c>+s}Cڄ NAmJ6[w,oV;4Hik駛-J~ MhZgNbZe#vws&g?̃əhxW^3%j;cctkGA f%Y-*-؄*iPʞjl<OPNLTKb{Y/DM)ime$e栲؟0t5v-GN][=LU.ÛUD3v z-ZnM,J2JWKTdR-C VU+,8?dEWsA}yBI7d]>xyZMIȇ]f]xB߷z"0T r E#JE(Nxh7nG(&X}Jˊg+Ŀ#.*_I hc=#òi%;ɪxU|`x՜Gi!l2g睋S!jT'ߡdI+fe] 4"gX!QpmdTPgU6Aixc'x Z]vgaAzgvYnn&[4oth[OEn1=77ow_;l39 {yu<_׋ ?/_'G啕c .u Tiy\ŭ]+!x5;n Xx'\yZ ofEEä}h~gDTtW-A;|Pe99{K-E{/uI"j'5Y\D.^rG6k{@ˉh8N/@bls<ɒ/&=|o[Qzm}JmC+ᮈEѼcxwޝzm..,ͮ-ȝ+ЭmS%A!Ad M+YP>]T+ySm-J5(e/>\,J{;"4yBџѿtSb߿OE'{ݨݸ1AJjn5kNWJ=99;^lkwYY(FQGY8vMmUHO{r}I]E{{DYv&qdT&rTg{>[{ MR?9s 3'ڇ|), ( Z$Bʭ`Q& eW^0W? ‘G!("@^d&DYCH +%ZkN1jjB6yOQE &F6b lzǀ?M-*1`o>n=Ɏ Q. 
ޱ2¶j1b꽔h^J?\x,Q9̞̚bJUW,b/$+b6QU-rq!f}3L A\ `,3)2 Q f'}|6#ݼt?S l"j VR)`}600^*0BsvĘ2HÅ|OAFM9M sZE t,49;jZ{hJTٻOvmaw]U$JEE)1P|R oYiT#JVkLf1;%gy^kpϮc!lBޢF8$!fh^16ZFka5[F;",ܒGS{o ۺnwWrjD=|Kx:G<Ŧz[$,s!<:9CU]zL vb@?|K〔I Y M~,RkKWYZx7PP~ҿկ_;68ܷ/v 3Z[r׽"6g;llv9pkBw-_7[[ڞj^lGJLXlLkcL/M]Z3ۏPm4, ZKevɉ;T~Z9|_ =iw6|]hQ H@%lP6U+r疙A`7ՠ.Z@mdbɪZ|!T\C@D\1%/;g`︺B?@tK}=%Q`Di/,=پ畨S _Lt1V8Kw -/GTq\Ϫ Kg4.l:Vň4 3ḃnLD<&Ŧh>_\M|01xBtjJoЗKq")J 9XUW5C)M'tڜ?3?<cMbNƂiV~H)XuUԬ)Y ɑH&;Z[638QJZо%"DoyGƊ|}"˾T!v'Gˏ/6ߊ>(*CSF꣨trP1b6.j [NGk%<(2:5Zy(z 9LWLXF {Yʫ``q Z -Xm-?7O}+1/guBgb#v`|e'WL4qbf٢E[eF0%G6-EGq<v 5i&4w|;ҹܿ"9G|)D7%e-9i v0)LawfimV<09ۃ;n *'W93^,JA\=,K篌B :uN&<*R%Vɼ3}[Ňeӥ[@ -eNj QG!"锨G#,Jrk)~"2oK7=iݨM]_ڟpM3jvö_ޞQ78ÜmY>I8ՌRYh%@Tp+p@mFlWN|OVG4WLy`堈QA2IQfef.%6X\c)auCf>)Zrb8Л (- > Ĵ-GPm9Cp(4_pw?"n& "u}A DOi\l, Fh&#0,?)ug h5?p|}N"]U Ǔ5X7w9jOk8Idǃ;3=t)"LqV-)ǖ <ܲUTK͙k#^h̆x!cWnY]KWt/_A˫ -ca7+[V/u:x^j~XmݝwMbه& |fz5q/MG'yTF!qp=̛'U&ߏN厳~Ǚ OI_J&F]X"Y9) .qŲ͹nM^C]I)De%x9rBT#&^uE9-h{=7y:šf喙ۅ?E#76 hW.kε^C DdXMs-zQ}O2MCsy4 d%g.Wixoռ(|坲EK%7dqα*iI+U6ISEψ&)XEIߍ%o8\%ۑGCSLY?J< MS$s#8#Rݯnc6R38)P 9O6Xg\Fg}N1TYDSqq}5~%[ӏ3ԟUwfغd SϾ\Sf `3SjBօ,BRB YH! )d!DkUBᮐBRB YH! )d!, BRBZXB YH! )d!, BRB YH! )+b! 
)d!,%,BRB]W9(?J~=xQBJpG+CBhX!K1\e(S75r24D#&^DHi>=É)hs0B,W(4LjMeDCqs.AE`j|ԖhI[9[Wt`񙟗-斜 &$K2]R_!KTN|n:*1uDAx&h䖤QQIBs'-QUYBs­_$x ë:/ضgkkLkS0-NPD2V`TyuA4 #hcd<ɺ2YdArOƒʅ4(ڵe=B&mШߋףPm&^<*cTDjZCuߘVN(E4}j(߉>Œ$Zs !Gԁ4K8gJ,EL+/uvIQGdY`Xj8 |-i~?G8:ՠaQ6,&YYmV9N/69:}?nGy6 TEUՅT歯npݤIn\y cjC~ ;fQ~}+.Ǎ+wJ~XrlbMQ##ڐy|[j-rsLRgm.\By/]?0O榳̈8̦SK ΨOdT}:K]}hAP="'h{p2_}_[diY.Dy+8\27#Ι<đGlm Fw|Fgkw¬ 1I3ZD) D&F Hnϭ̀M̞fd4hU:Zc%'3?d4Fl!2CXbPQ7mHCs ӫCrM&T^]} ~ 2٩FsC%؏/ʒ,LUWɚ=y!.V1Z \o+ǁTV@Y.5<0pHYyPƠT!ڤT^4d_j@[@49ݨuQ9mQ )IT'F(hD:D"g[sԫ<\٥΍^F,I$ R[)m0)eyIԟ'mRffR*x(gDT EР' I OA+Gl>4U@Mr!ۤ!xs JSK#0c"+D2BsD5NZ e@Ac}L#[@[s+Ơbܓ92CxF8硡m(ig0DrTΪ*seWXR LB>fC.mF SIn(@^Bfޡg( Dh,@{Ђo[1 B2*Q^wrHqnq+ #] c{d%|QN>L\2?㎐`t˫ >{s_ohy3./bu_o?; |k=D'Kmv-HODZYB_v`~h 4$/rcզQQ{>zW~tW "Q>'Okp㜸xSr_Mi5ݛ??r)'.Q2΋uزܦ#q8 ݞl}v_iy>,f8[쾸[4kC-u:x^j~llλ&[5;hi3ӫɨnS|o:6!7 )'j2od ɠۀt'|zeݙbC)]|K9RHAH[ uփ(C*R%Vɜy Ro ֋KP (5 I! @!"锨G#,Jrk)հ}ZaI+&U_G/ 43߯V6 ۲&_4MޝfJ-Q}||:7njLzqfʧ@+*[@vҦ=jvl{rl~::<iLȑy`堈QA2IQfe׋cZYȇќqE:n˟;z9<"fyb brn(1} xJB+a~wX"Y1V/^XŌJLFl'&F M>H^dz;#%ssD-FN Mw=UwU+hƦp'wu2\u Rxk KWX.]7W9t^~h6b KΘvުyQ;ׇ-j^*Y&fw7}F淢]]T|lsg8yXxhκ?+mbjs絘^|?3Vt=Bq{S{ݞW]ŧ>΀866QU%-i%"&i$P(Q;UQΡGCS΁?J<5F ΈF䘍Ԙ`3`5`UqQ8!LJF *9+ݺ|@ |d2yK+qfz =7L=rLa*Y(@L{«Jjv>prB%o2PIy6r$*댩 `\啐HМb⑾(8S'YP- 8 D$S&$3AcG6xN,JB#3>$ B 9[@ !eqf|fe 5ZNC]&Pp?^- ::29tTB !a F-qUb*I 0O6lI/?l #%{K ѻ7D~R;]K Kײf+-j?ayA[(D%7Wʗ%VCܒev}+ ɝWO>NqFe2+ϙ%Lo`e,Hk#hThjKĠhJ3֧] gO.BIS/"q? DsTaKgZ{ɕ_ k*f.vKr/.%Ggv"=E,zhؖVuuxɪTr@\$wٽFJ8\Y$kIV{.:'xN91˟.־;7crĴ$ʔd!7Q9 JrZ"Ltt4\[ygkfwp~/7q$ssu߶R\wj3S̔[e.c1g"rݝAjH+S2 y*hmkƚ_i9g b \2.+Z^M) St ^H$eg0eJ=IHcd:0tmҡ6s5z*jFiyvΌxyۓ}h>U{BBnfI[?!!G% ބLtF#^"r mi [r=Aru%!&òKcT‘S\%2gYoKyTn^ct^SLC7vQB ڕk)bmG1?jБ&Ę=' WVs[B)?PVGSyusy*$zÕүMz'/N3OZ+/݉"l *3dz_ΒVZ+.0PI8bgK$!!$)DA"I0n CtH:R Dlr|T9-1 p\B04#}fsRҥ_G91rj>3n$n}y5x:"Č9>0/ȓx62 Ld<( 8'"@-<.гzZykw%9)ܨK!$ "cHP$QC! 
A IEZuze> nZ #G -y!Ҩ4ɳM^{8" c1Yzeη^_?-NYWݗA eAne ;ȚR ^x%Y"Ydq30F9i0RH2PdAqZ 0^?#=,9ʉ5&W-NWdZ(~nd~Yڛm'z`F4Nll\>H8jkN`bЬMUKGŨMygW,gFa2DU@}߳nz)FZ~y,\U>4.!eEeT<#̚&k:̌!0!Jp<<8}M5kZܫa#2>I$X PmSk>k`(֠dY夗a݂^-WF`ȷ(^~sz KHS|ҳY#/zjp3T^\"O-^JjVSNu SD>1wF:cg8OF^QS? IHӇm`FXhPZ2X6&_m1W-*}ja>'fApU V1pU*Ub W$iY V61Xkm W~( +JN{Kou\=^yqtGǢ%}v F, ,֞_VKY`* *^bZ!\ГW`#W\M+CUWb{+6`kdcઘkWURpʂբIk`Ӝb.60YKxupU՛+ܳ&1_j?`䗆ު\;\+6[zGp. ^St8*/- okfۛ(  O@>4@L&Ͽcuf<%vm0wh w)|Fԯ!UJiAVC/x}=}X zi~O4 /USPy)sV}Ս0-?pfRLY3q'1VPj3}xf0VŸ\6ŃU,V(YDp*YNvr|,zY$Q9^%.s%WR+A7 _˰tR哸 .zVx-ͬBJ:+bJĹ1]guL䊹J7Ek]+-UrgQrBrq!=,m \s-o \_SR!\I+ErtDҋ2(2}2'BA$LdDHI1J{/Xܻ:K9{~%Cr9;D"6&I{F439bԖ%y~dy_(*W  aꟛJ4?hLT.N&Y1ccT}}\??|~ԿI~R'|;zՏUc <<4Esbj\\=k mŚPiN;k?Ѩ||egp|4_¡WNÛ/T &*ny{a .}{J؍g84!VR2UT&4-D 4ܳ+ {?tocz9-JU_q2~:18/2n[,cq]IDuR/o蠒֗WAEϊ6|d &MD@L'Ɖ -xkfH2h&&($GQgtMei7WoH_2pg.<;/w>זtcîd99 8%pPqOƎؿ~5Ig%I!Iz#w(Lg<1Eη؟]ӱaХC/44'`XK7WGmE¬@owiiɖ^Oޠ~|KΘڕL?t:<9ڛ~7o )|6&O,͑,M7ޘ<OFjΝȎPKЧK-M+tjTRP*֜RtW?@RVRfW4V29QyҜߒURzeכRI.#2CJ2M2q3!>;8A1P3)(ۂ5!LLoaTb# p~Ss/˟|%˔#,G Qu' )MP”̤` h[Y#dvC&+H` ]Q:kmo.oRK)`7.1}gG)HQI :*s͐GVQ8d X8H"#4ɾ¨w%:w)[G1p. 'zX6>j]3k!ޡ0G}H|Y&"+8\X!AgV4QGk3gjлP5Wa:3mOnvfº'z< =m*^?鳷3*@3Cj. 
K $EF?D/A6(Ճ*{\[KB:LVe)q1L#ʁ=6sv{Zx8BO5ֿN_K7hܢ+Zey!e)wV+@߼\4 9HPVGSiNG+*4z4SǍ GhCD i*E /Iy`^p `v1HXEC:if IR$ǢmޡC/J:&PXeQ֬k3gh?{WƑ O8%;8{YcsApI.F/2cTDʎ~3$E|5Iklؖ9Þ꧟: 7WYHҏ-)MULgd}bVh\KbX EgVYfBFg!rܙd47GKH"]49ym:@K1=|A*T"3,M,&2Geu#ǕWف\1hFF22=Y!CcPuf`'V꟝q3K!J[SV"),&#R WhY??N۟kif"4XcJS۬8fC?b J<LY$u,v݉^D&òDk%`dmQjJͣ&0,`Rm|Bhn dYT[id]~^;ezcjܿ/÷-&mW7QzN]j~{oP+g+Cu%.{I9W^iCŝFbKWp`YgHW 1.-S+{㒉KvzaI~<Əaps:"PMuW9B>_Yip۱K7ԑT~-(Iܼw#hZJE/}PwzE=?_ēh曦=-P^L©"p#lv3?SQRA%Elً5>_Pԯ_–rW5Q=QO˯ʴQP:lAcQeAJ{ʴƟ~{BZg>`o)Vت࿕y^m,=^8q$R BJ kWi%`E4Ol`0 Tj,du-!YA3@8-Ipc8HC8 drBB c" ` siXHs4h>@tb@-a ;zzC^tKá }O1XܺЫig+>o)$JJV^"D]N~{ !EXQHd4h61A"!,̡h1cPG&}vkL,2+&O[ 3c,'Dԝ9e4퉉k%f73.G~9нKA'"kf@3rmAk!(x0^YvR{8_RnL@11\A,@\"Pdhyny V% % $ȼ.D'Dp–X.G,`bN8/U:qh)Pc ^#~8a#T911̫L) CᒴFJb5^nX$wosEWmͽ˝A;pO@W"ZZw%Ődz4DmlҴ T$mxe0BfUzSrPR+s׹Kl58[HoRlOq^:z:nm8&O<2ȸYJO1$\)I .| 꼳ȥ]8ܟcj]R{%b Wϴ")pv+(H߽fl~~>WlpٗEWƔo-bЅٝ;mĶ">\bN@ߚq[A,5ZvLkOL rɘH=_N+Oj0[Xj"+\$B|w0dvEcvAj}_>GX r)X`eReɬ|p\[tdnHR(cbY`ȊQH"s1,֐4bqu"M+_iO*|U3ֈF4[k|~B:x=g?Ou&ϸtk%(mbW!%U$CEFlty`:L.i {$x un<=g {^F=CH#c.8)(.C>J!6s9xt06`Mص1;;.<'-T 9#mg>ːwR,d.DW F&pƑJG[57Q1 Z&E:;I?k^~JEUyc9NZbfklxqUc#.o`Tkzkk8 M}~yEES*gq֛ͤ:&7ߩ[kgV—׏o7Ǜ1%Yt0gBpLiERBO: Jn3XZEFlN)"R\`ti2<&%M'8g5q8) [ӌPvP ޼>x|W@h.֕6oq2σx#6:P6GŸ<T^A$,Qlb'L*gWV{^8OV  Bh:Td1Y(@$bѺNg3b0-ڭiǮ-[Fm١v`gcTyښH&g(}VJJbr4#&'QȐcDjZìd7%uQ@&!FpqĒMIF5Z[ٌQ?VP5Uˈ:DqƓV;qA LV-"xr 8iSh#Ne#R";flf3&A$l,* ΄b ]dI e\Eb'[g3"/,|iɮh[Eb YQo]>ZVXFݭjAl6눾-= { D5ƣΏNWNrT\37S!WRAt* ]X]uK:1ĉy4jpkkc&ɤ2@R iN̵29-9uH9E}H2^|i̦BW?Oˮ 1C9:C[ՓTT/ gD>! ?xZs]yb6JP@ &;Bm\z,@K9wWM>D@ƈ +hA# s& 1j6.@j+&~y ;}@zwE&T~Gou%vO 8luʮw0Z7KQ=QLE}[| _KgڬCgZgU*#S{'Mr铿יԛo8yÓ &9+|=cIXfIŭY$Sz#~=58s_5:9oxɫ)j}PV#a_@_][6C%|m'tM'ϻOEnK0E:}|Tu5OP[jvp܀9=Mj){/|Gػ]_E}"\MſC_(xoƮY,lNg`v!ƴv5$5|ְL rP(Z+xb@F'xFeת5dg,΃2| vpqj.{ȜbO MOWn/$VȵG9K_MV?)Ab.&ctwEw_7O9oht~zʠf Yѝ]׽_|{Ƨ-/ܹ^F5ͮ2}mt~w 'Ofj3l.Js՜m+J,oı\v+֖[Jܓ+(tyMW\ ro-kIfnl5TBQA֮JA0r4F4phFN=DFNo'sJdR&'b 2ҏJGD(M:isb\J`:ۮ Hܕ2<$cɎ`2B!$N!2WjЂa22}J-q6S\ 2OO>^Έܵ~/'؊MPJ>KH4ώASa=MK`n7g$᫖PΘ#&9G]}ЂGm}i?" 
3~?ۺҌf^Wq@Unit=\}΃| ͧ|{{Mo(Gjv*J8:0q9-Jf~\/V+~>gå~?_oCFQAIz^?a\ZccɃEW}<<3 B=Aw??wQ_wxzq=bx30w~O?>}(l.8:Uk|V <}.Nb4F|3Ljd 8y:7@T~ZOr쿄^}/5Ϳ_+PR H⻐UtT)1d )|.^๾FxHF1CbTc=r1r çގGq6ySп ez^:x(lG[*Ϩx!ƥ *@v͏0Wi]}7O%k)'qW #Ay`okN8V=6<Ы#$;J?ݸt[@ˁGi}՝|ګ;&1.gkl&8OQ0|سjO֝@kU.ҡJ"9*/<$ {5@k0y|$ȸF$t F`_U4ZU?dvbLˋgjt 0+p<ļ]|{`uA <.%֧$cLJ ' X ԓO B$?;&Nj`*,N>mL7lċ=7b}=v2/|h OWqf#T?O_([yaud4g!tZqw;6֙*;<&0ʝ2iPB;[t QJQ /(Q⮵񉖢`i8$ā4QBca%B@H툤Z(m Rbt2KITO>wV|\釋Y'Ml6wPR$6 U E]"@DND!P"E94 r kx;c@-Q`_`S &)ۧom-V mB/;˯u) * n?M~Ԁ*2cZo.t׆ƨE {uN48!*N-* F&lR6B\XtIlٓzx뺜C#J3E:@1z %NH]HM.*6gOºeŔ%57ӯ#$ dBjb " JdUfOWlNB)Y-kqжA8FigQPg#'A%2D6XKFC96gzM2%ף0&N% O$ )χy]2|mOf`l-՟rg,|yleWޣ)hQ\`K!jZ2 >4zIp xѢ+D%J́e-&4 2a)\&+Y ^JX5aff j }ѓҍnn+2ް38ߘ]8mva~]׳/cgP| AKvO6C_1H̩LlщL!AA;նc* TsFҪMm]: s;(\Drc7gǎQSʹP{m{n-^t֔ KYq61%"P]hHJ[e+#9?,ƺH`Pstvk(dUV,GQET &B6z~uRZxfqGz{ĭ!8&Y A HOFq:rbl1I:˒N#7e)Fb^detaWKy4"6v`,EɣM$2M鶡wmP])T82RJ4N歧`-j.$z ]oAiwm[J ]N{#74b7]&vI9lw}ұs!8H6*c; 0ĢvԢ.@]0~}Yr:of?-9;ǛٲKNܥMʐl^C|}{>d kХ>^u(v59\_6Zzfe;{}^tz|YJYݗ/3O~ s?®ujZplDO[?Va} :~W}mw;:Ϫ뻛=Sɶq !Nݟ1JlSgWD“Z:v<{4;D[>"ھb֞t=boV#.t Eζ~}F7vH'j6Pr}h.>ϗ0.bUW[2GbuR+6cvjwm~a l Ux5U*6G!ԫ9-Ϻߨ-/i2=3&˼NTuhۗ b-M6&ĶvottA;tt [ߩ2p{h;VwϿrB|Vm:j2jqZ|,TTEЦhJPPvKDsVGOBӮؼQM7g@_TpyqCYK0Z^ ϳ.=쑯huV+UU*Kjө*V겺d3Ϻ`:ث b:"TGݚ )}M7(JARNNz ,7)֭C0R oDgzGWHN<;l«뫋9ӹK?@Z['tNm.DbBkzH8D__ۺn m n%CrsHCCwڶ]4]RmY^n7} 'OU!ĩS $Njt:ӗTӵ 5`w[P7ѝ GW-=F:VҘ540Ӗ﫦 5Шvuo/#1 ~`. 
p2I:Oh(y><~m=M=mw[F}Li WOV໯FݷI[:N=fU%jArꉶ?Hzu))ep]7q|dhj )۴UЛ VRcM6m G|Cu~1&<4ouEr|xӻrvK?ǏODuSlo8 GcMn._^}|dZealWJsc QOwqq ݮ pjöFi MF{1բY)բmբ*WEOZ3xl6vv~}eU_ƻ^\w'j(Ԉ"BTMJODO^UFOZ+;}M)HuL>6&@h=>b}]TQA}j6 `]R'#6΁6ĺkU\coR DDLn@?uԓqr uފG/w/ZRϼ㓾~9N oҿdql*Z{!X"q"5oh"5Ɗכ %RcZGjLޱY"EjD kX"\]1)}I,MQWh #HWF+5#4rSj[t5A]EXAb`qQ̴zsS/ftlz?WNY7*AΌ=F] őA0/]EW6=н HWQF+EWD;Z08R뢫) =ߛ_e3rBzӕ͑뽙, ϗNmg-Y(37Ef?G[4/q6*b@tjn~ݍb}(못ϖe\N%/^>^s i9;bfszkmE)qb/.aoZy.f]%lښŧ~+p7+@o Ʈ`3_n|~}MY57K 6Fev__68HMSӕ5bpJv]ԶvO=G~Ӑ_T5ɺXmO`{7l|RˢeOѲGܑVv}gPu`&5<(nBOiH_(V>dxkN蠱JH3W[|j89=gAPjI-0nRR D=Z0!Jʄ(FWk5 ѕ ftׂ]1sy+FXpRRtŴRǢ ʇ+@]1m)cY7E]4JZtLAI3RtŴ6ELCuPI]pTF׈ uŔ%w5I]J  X9b\9[6uEvo'Gѕt]>W:j0=38 7qu5J4G*]A sEr j\h-];bAxsMᢒi0ֹ&(!MOPFM?UW uT^qyج,7omn9xo],͛o7hM1i?;l`>7d>w5n+̔^Zȯ#n6|0>Uo>q+^-ݼ|D}*j w(g薉y8yQ~F|r23=y 3 q|S߿^7M΁ _e믟5%` |*JPfkkmN7S/Ub UgKx+߼wqo6WYćО?_ *ohj}<>mkYW%qTIYB/Ng,i"\Λ[Y[ ɔ\HnUK%htZBnc?Hȥ4S5i|G;:[;cH;BC(B%I:,]6yCюzhs&tDK)jω6{Gb:0JHkf9<1pJk: kg 끒rWj9вb>;Lf 0kD34p'-Ռ iTUiԒϯ}{"Z#)^mɺM(Km 7 |P"LCt(.e`0 3!MB11ИU6F 26{WߨGJ<D=@#.DI'9[dCU,Z:d %SR(U-рOBr>=7'!hUU oZϩTRnm5ljxIl1/p_c;qN>I5b'-ؓ9fqu$~`یU) 3!9*_*X )%$ڨf)@_"$U*֍KA_ZhI5jkT 9ʷ6j(U|_G6Tق1K(W~ln` :`֞Jb=w|֑]Zy;a:ir@,x&THdFG>rN rSNcG3PQCm>+h-aK>8c AL7Xu]/ Ţˋucm̭lL Yـ0#Ȍ,Hw0`T` HJƬG6lmC *TD@5w˽AAQU6MS`q^baNp* Ӥl`_Q&t$6GI: d>Q"R}W:a O@ezm|#rd2 VԠ uszR uW4G e2a̷.Q9Z Ȅv%H  OQU0uk)& A'b΂G ݄b31RPkJ u(R!J@T`N `ɦ= p\AQ{Se*8(@Hq`Q ڳ;!JPA/uj84Օ/H!8I]ZD5=( EZbKRt0Btʚ9fdYk|B H\4l=x_Q>VqC hA}p0~Ew R5xlPBr|\1fPTԃ,0F!NHe]񈝇LˋM?_2^ׂ7`X*f=f~pvm&!jhX 3 b{PT8xiT Mƫ9D%@ۈقj2AVa1 %OHv9!Lu %tA\xO(z+R|$Q*LFȼb|b1;3]Kc4/zOG3}dH֨Vx$ ;< cx.¬JrC#QhzBb9¶l^ A; > $bӗN7~|Z+ ."O֮ TG<)} ]{ #Aˈ|uC.MAy1k"! 
5y]%@_!80f;@]Ii(ʠv0FϚ%\)䌊 Y)֎ a<Qy@1 bu5;XfՌ.#8Xx;:B5^cgEV|s-m}^m6gGϞiOp}V >fuqtxs&]GJG_nހ BۮomܶtaHLmT\GF/pU`^9pt@ȌJ@O $ 8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@X'P`b#'NE ђ{N F $@&'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N q ;,**ON W^  Q=z'ozN $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@zN SU{L L {?!j'P83 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@O z޶}`_8学~{sՆ@kb~Qח/0.^=mR`q7%8)1.={CCV]1?O\c ]e:e;]1Jyړ+T=+M/tyt() ]b8t;;<u?lYr^/7sI ׻j9?K.Q/~Δk//\{ƗT6%Y4hjȩ{'lx1jՏ/ >ۃwkQ'+򳻾#p$=rNqs\{v-/n4G?qp5"V?틚ӎ5` -XBx6m*<=[~jpv\xAT:R~>aFş~87ZCOlR_+6mGox:nMg׋gf'{smv5hA '/:-|> ڷ4~]ihü? ezd+a ZkmH ܇= mA E%X'{&O1Mr#CCQP$ b4UտڡK@j}A f\r|)*S+乫LzPNT;}S`_׽MxZ2ʷ#gc`ܔ l MiT"e}N1nj@q7X^ގ@(</gaa>ƣx]"pXx6)L-T q<]G L#]{-Mx6W7ѿE39 ?0=j\ : ;B`z0@śնA&O#w5hmov0_URU"Jt!ww߈y" q rh)ʆnX=%r9^c!nDƆr")>fe5[Ǭ>j9r=j95 Vl;7RXdQ#5""pqsAj(& Rz)L-gZT չk usN/F]eru5P)j@z> S>8=V] j2bΙ3ϙ3CWQ]" +H/r9t ]]jЩ磮T\JbU&rU=T.9+ѯ2R^*s)*Sk>/g|WR](g`ŨL.2ǎ~TtՕs L4;F#XZ]=\u;֜(уԕx:tsk.H]GuԢ>iȣUT3RW?+] `2W7%Bax!k%JiZwVn_h4V;7ˎ~sY}nCUY#It q02VI9e[ϋOS`ko,r)5Z0pg MMUַ<\f@+bX}Ow뀲y:v1b8vhSr}nTR颡>}/)gm̸bɫXճZg?qJۘTD%7sIGxBpHN}ov`$G+~Ϫ{:+V̆i4Gb:??rQwr˦;_[4ۻ6hnT&4]VҰ;A+(ь')~]̟87Yکl:w!SȆS[ߺ?; 춪7La]oμ>y ;:ޝѰuJl=le}LOa$$u]L> 7Ũ8{mBYu}3 { #-V(+|P[CS>}-/oR˖?HZRFqzx߇9ҷq09ƥ%*E1P)YH:*<0͌Nalr1μ xC|]\8;9fvY?.Aי0'1a$+m0*T=jzǚ2G;ش'!BISc 1!fSi?}}GD@ZζLS FA.sA0Rq-'T ^z)&Hn \<@meLh*"6FB)+:mx7j$mdm?2rls'r uc>U-ȽM =ZX&;dN;K8 Qr%h|BI ,vmHd`AS"'4D fV,=` jl6^_nb/. އ8͡],?b^zI.t|ߴ:z;5`LSջUvyn 5I9U||Em. H+וn#& i &Æ vSb$FL|<~ܘd143 ɘP"a@DԬ]~)$X,,Oif ׺M'傔2Z;j~Љi%Oع&.~QfhN.7eZBDyդ]lIJ"s(uL̨S9liR3BkU!"9P FFC4:᭶&H\Ҟ-QQ+Jۖ%5r8+35a]|u)$w]nS9%βs{=S噬;i͓ sasо/&+Q} Q2sEtXddJ&&|Ks"r @'F8Fȼ|;mW:t!XR؈_h p" c+9ŘG eʔ0`(HH*xb҇UL8@1 1A)FqRX rrK&ˉmFY\pخ3Wŭ򒾟7{F_d-0LQ.U}A7gAxZ]Ox` :¤HI8I"IP2l . 
S<(t~:Өz:i)IQPFkOA1Zq4CC{riEdn0뜧0gjĚ8A Hp>`gh 3D]H1Ƀ֤,Zº -dv_p`>ڶ~E'%Ԇ\5<*j\k LS-gXH阴OAIJT1%u;5p5~Hi3ɞyX7N~$s!!7^7v6~_oofOo`h/{{{wQ"X2x1cGuMo/Y,wπr DQRi9wTt_G~12M/rJA4<?GK>D}T "ư"&P mz`o0 ft5-Wƣ]Q~C?MF#q|=vUT7˷yEg?ZiZ^]oq ?8Wd#Q 9dIL^J!ޫx2ɁՃ^qg=B^:f˷~&br $"VY2Fs{0ubD k T缌׵m@1˧z `\M..Fyn)65\(fVCay#}D[-ԣx֫)2w"Bn曦Ϳ^XS;q<7PSJg nȀn8^n'/B8n-FW [VmЛ#PUrC>7oKyG3T5<2I9 7-So7LP~^ ocf%YE6ӎ/Z:/,x0!2m]:ҡ~j;g\_xƸ`{prA!`JZ%YZ069uCggľxZLx缫("D}y"i'yfՔa O> 4IkEe|19 ƁAW( 5-³U□TJON(턐?=ʭ&C;HqRrѵc4on7r`db3Ő"AOrК}ݫ gz#V;Rh9]gA )0,IE Zx S! یaRrNHԶ}p%Ƈ]>ܬ.pbt^mIȧ;\hKuW0w;`w MYHd<[MJX-SA"駪U>=Y0Gr^fOQ8 X XE&8y.4CԮܖug ;5W\0qMٞ> |Wژ\U῿VC(LBu/Y$̮ Ӧd]VS*Tj76ޔu 8#23 $.4LBic R0a!Dejgpo:U+_|0L|]}ކ-SXp%` h8YcTNAI$%BT@#u]$F&a/`{H"w)q!X4j 3x)U1 eF Ru/ &d:'%(<8(mmy vD6kJV2e8$.p od|4ΕBhe9:oVꗔ;Uf듍?<a#1PG8`0.Ik$VUi4!larB77|$4Z .j~(Cc(1Azgi,#~-.5(_xM+{=m:ӣ(;]\No^"\]/=}Է+|w߾HZ%~]f>HJmnuvddI\)7vLwYE~_-{3j#o.k ec"\5Hšwym3hyX.im!M 6,+_kT}njIړ{~_OXrou̯Y #@U u[{{]Ur==HWi`_0:@;~oъ..׀6ZJkpV\%bc8f}Y*[Ǜ} 9CnH~G>C%9~ Waڍ鹤i %Ɣ%yQLz;PPEg_doyp6ht`7 6[ZYfIȒs.bAX":]#Jq!rAZ"Ju.)X[cL{5əNLq(ԧBl`[ze{hÓ@}D}v.7- ܬ曆!Ii*Rf*F% '6'wїH2Qd֋"ymDtJFUIbpoR022ed6 sjwSr)ianS$CyQ3-f.s.DY5z;&G!OTOONB;:OPjJExgÍ: ֆH{t La!t[/Z{, [l$}V|ɉCOFB*i 9S^EL^^Wl7t6mX_!٪m>|{fztSnmHڜ84[:?,.n=㔁dז3lhaf' Hb2v`1{όr4 %=)`l=ѭM0EI ^[t; uU:4cG,T[,T|½w__f2@y3{6f\~/?gP0nO:[8 S$Z-^8)5$nX#ΓCq\,4AD6b&ߎY@jDWlGl9k.f_P3eǨ-cOV9Jd<+:KY.AHILX|鄤y 41"5aVy y(]&$-0Aq$ Ǭ4ɩN\xؙ8qꗵz 0;ӏ]:FDq@G۾Q aiDK'Y#Q(%g19u,i$I7F.#g|ȑ4GLI @C>t]Ӡ3qPDC g¸LKvEՀ.>*0rRPcĀVhwQ)FL@$I6FE/% \ vw\^=|\<]4shQ,4jEpCcF$<>! n~U{H8Qu4W}[I>NhcVӜgOW m#1͍]V!?\]6Qr\U(7ؔg[֜o(3ql:zR%r姦\iʋhwc9ʅ#zg:+%P;-dQշfCuBD x<\U;G$$CXGD>^= })qY9#pC^ L\3$2,]lNIzo,9 zk&S,MzX\zB>Hy\_ Ɵ.]$#r7>_fhhuSxV4}A ^ik7& R ܐBae(_Buڳ\ԣ5M/J^+RRj~&Q\Hƥ\f\]fG nt]c8]]ov~(ڲwUeOF_gBg>jenئś?qgQҶfZrf )FD)!YLX 1+@ EP 51Lԉ K{@KbS7{% -[(hUphئ( YXƓB札,IN0; 4/YĎYBwUkۄ~7j3pqr KVQwX*v4ZKw"Q}.@RCwW][A;+--S UʾIp A0GWE` GW$eXH+z߻H)W hf A \q>*"f`W6GWE\͎`r-:ձÕ~pCǮ'.8v \=OJ\gjס'Oňc"GWE\c"C/"5\FkZzenՐ*׋QZFl!bj_T?riipnkg#!FO8 Rov=? 
W37UbNp8_^O )HeEiQ6g>{|c_Hy\4ǟ~f_^h4)@z>g>E"Ci,/UyLtj~[Y"jY_)uhQT?~7*'( !B\c2/^띦_t3jPٗSr]JLKWϐ = i]!'BzteD6w\k%CN~CWfϪ7gZK^]JB< S-> <teZ:)cJL4.M+D+ũT-]=KRhucN%h9#V0F[ʬlr⧀^w/~p3]v1Vv7xr꒟/FWi*?|eNJoW']-\#]X8L9"w/|Aw+X?i"wY|@A6 0ߩ-Sa|Q,Y5SOC -Iᶛ7Oؙ p@3sޗo|5=`y}x9|C.pd]aK?L.W>sU RY,:嗉?|za)]/sĆƲPJTOv;{V/ܖTy]fV-joi/F2gb~t8a'i*b:qq%H6:9&UWRR23C7Ԧ༷TDY&1Z)ϑJgj>1lX/UYЭǕڀ*|~mZY ƷOoY4"5+@K ?L+: y ބA\^ ~AA{~Z!t>!^s9tfV(xrqMq=-.e!HP x6|7<(*UB=a*MhwR͉4;mj:{pѤʙr(+twz*XGs+);YV__0|rཡΛ,`(J_q@))FWbeQg Er!DK;cgt!(_v[P[[R3|j4HG*Xe |e> 'Q⌯޿V&6ǕuCfe?.XtspY ^OgʹynY0Y"NhX c7IUMns@Om1cJ;{(LigUU~wiBRֺoߝ<ʰG+)wh#y,f]mFfx> 3.'Um?u~ָxEݙt4zݭ h$ͦ+ʕw!oZt~JPLl)sW4Բ3pM:t*j#?Z-8FL1`H~^]|P@Mv&iֹr>aY):e(j(1o2㣁ڣc(`6SUd-A )gD,OmXa‚xXОVX{(ݑXECsIṪyBXR(:GB973"lv̂ox\ 6P3.> 'dSV2YmpyyClE/[.$9賋BϺz{X+N*Mr{loV!Sn.`J&bZDz޽޲uel}ʻzp֝h8{#Mb](ƓFt7\uHm&Q)ML Eʜ2ruޖ m=0QU֎5s9DO2dXnk">E)xPLof7oA3,}Bm< åZYR S\UOη( uei" ^xt4R5>v:ǵ-NQ!sIv0R`J ̀$UQB|`Xd2'<+IjM]J1Pp{c89IĤy4GK9d^=%P|+G9+KI\QX,n7e#v3[ˬ}PGv1$u`W%(4LX-D]8炀Y@73IBSlҁw@RdJ3D 2賩{"vmp5 E5S2"\ csv$  (3-/mFup#o!78x8mF}ؿq-]fdx)$gJ3`Ȅ{<9.#m_-`E9 v4fcVDk(i~{hI8>7=}hgHm8{vziP΄a3αc i_ y\RBKk_!:2DE4qo l:g*EрCBh3Z!Mnrvϻ0IJS3hO4,Sќu:뿆B.~{GVU2m84"!/'|Oqz #7ZDVЃ.fQ7&y׳5OSɛBQ aY,J:,L6'*?)Qh1Pn\;Hu> $Y+A c3\*?lécd'Ij._dM:TBK0C,WY1խhXJ]hΌȽƛ `8Z Q41aY+ SF},5X,ZG]™depfrrTkN*jmu7`Ǹg6mtyn!k}fSr_zOYZCuTw?YQ y?RmLʉ/܅lY F֏|~3)s~nV4poA~sNZ46$F䉑٪NJ*b35_FP%\7?\vJ& + dzE~鑯S5{ߘ);E-ͧIrhތ$&Z*ɘ"1CQBw=@[FxF|{ۮZbx8lh2OR}dvDx";^jx}W~ӑ6'C,9XZD9 Gu \X-C<ɬ[J=-|kw`%9IÙ Are4<0u?ZԣD(h8Ο9onZ lʃB$B.fYxDTηV_՟V'cK-6C8 AnϮ8Ev5:H x,hoƕM:qf'f hãL&hFg&dMJ-NCV[=/s2ޥ;6pp)7ҙ/KJ=70 tpSluGӬ4]KHW%gyӴ}%4/g2?TKBx4IݷN(|wC9;Bh c($'pt;?\N<l3pΖ[/8n:+3.f%\@E>nۄs5bIjOc#/>e.pG5E2$_ IQ(T#C4ӜUu=&wL#_6I_D?~}˫>+s>+3NLgR 7K?{ܛק'_LU_ohD<̧7ĔTJ}f%nubTVrS*F?t_/nH1>|M#D|z5ԬxýM Dd`%0 KwsJ[7v,e7&aW^Mh8K@zj̸Pxmqwl1ᕿ57L s7dNqn8^5˴9n7\Kغs $ (Z[_M{y{t&=VL-MKoxAQ{SpqhLzXֆE&v^c֎ K|0um! 
5._PͭW|q~dasH J]bIcd)1͜dYzv%oX%9 P7gLgW!0 P< b@DY1ʳP¥\yQ:筐0Z e%OJ+yլ/ AƁ ASj4yJɇc(R' yϓm9}ǽr+6`ct9W,+v38COgـ4BF 9 cvo:4}|;Rh9FQ d(10ɪKWL2A-+s#O: _}%^>,yr ɢLd<-)R2;rb>AT*Gi]u rW/}2خd ;ZMYe 6J/sU 7[0 XFr1O鑷ՠ Pnדշ )t RԳ$!bKU`|肫U824-3hIu| y[:g*ӫ(v;%x]Iȕ >|5xbIK2$QVBKT$Y\"g*_҂gz ЁЁFi sIjm{l^|^K G֩:}j?]FVgkoQC>O=~ףnKIn#{Ƿqtz2"n]ߐ:Oߑ^Vk^zk?xk 9hnF@l,Ӷ)&K+Ha7t* ^8&p0>Z3Ě &G@.4LBi=c)0p CC҉ij(盠݁7/?W{!>boCaS,˞URVh:YL"D% -?LEuĮVޕli6=(}`-{1s11KSݾSg6yxOp"|jR!ݪ/5@`c=sϾgn+u<} ӌ;dԒh"̂L{0#>*Uf,D`Ef$\GI<&rB)"A,h%U൛Y"g?[GϦxjҢJTi"ғ&nogWu9߿EYw\\1QP +N12N)̩y ^jŌTN/!JΔ1#r!X`h@"o R* r&xN%,+ydZdHpIR }AJZꌆClsJzT4s#ؤoq($Bؼ5Z+cC6ȹ 6!01+ޗg/*1$2'#|8`#p10P*$8` .k$TS8Xdl'nН]hp grj3Wr'P eH)C醌ܔ=Sfuݠ51{cf!$Φjq<b!ۋ퀴 i9#"MᅋٲpآY zSb%S}X kKhX&VmN}Nꇓ]ƃZWEVY$dɌ0|2)G,WƀꦋT )yL,+Jun)X;km"Ch3׎HF~]-ԧB0GzWe{91+@qF)g>ː-R,d68jKl)-[/q5I@v4巄^D RIkɘ *c"gEK ?o fOk^O5>9IWr~֯еݞ-Qepj:z%6$mvZY{G?V@liS2,p))})eЋЋ}чոP}h+~6zv ${k؅Q)Qaq*h1ZY, uJV7h`7z;j}=S_ ZzdU2,(k?b]mTA 9Y$٧1'MCͰz56ڴOUg\ZA 9ۅhHdV6[BhNIrױ=ʣWNKċ:'Ͳ}ԐR(Fa9´B-:PYS*!GOZ"dO q*]91=(V[]^B@\YQ[IQD иhXyؿ^aO>Ҝ'mRYn2p̚~;M+M1et4gpt4:ͲDcTXQP*1)lʔBG]B=QFjv*/>Ϣq.& ïF fqq>)NIa&_߽l19n.;-];jբKO{롸H8Aupwj'io٭ Ь3nw90ga`1ٴzGʽށm7sEe+q:ˆWI79.Gӟ>{qG+o[mTb#^rCqJB=ӤZNINͧ;C78 ?-lggU{Rhq4#K}{_^d.ޢ\N%c8zK&NfosOba|}X"wyc׬έZi]>V4" o.]ӋWh ,[f2DB[o[~vhGr6Ǟ=pA+^aǭHSwW-pK ٹ}/~| y'j8wc϶wLxgİD"Ѽ)HO)bs <1n8\nް2~g} iZghenخ͛?!7̓moWK" 4.y%J ɒ#p 0SV`@J2hb ڤ c[@k`ӴsBy$6Yrda,踐9g'5BQ *.f}FU]7:Oq7ozS0UWs ^? yLqR11P*$8`0XWNaoc 9laSm`[(`P|p /%]#8=[nMgͯɉ@4YS^N7|F0|NQ/C|OGf^3޳9ֹwHl{M[@zxD~ڊL/gQ-|FEZ} cRkIL <=ydR)D=2A?\&"lur*pP`Vؤ;[\MnzZ8(KE8農d~Mх׃O츂KW?7?lp7eV v^)Z/gwg.,li.zMe8;+ ?۴O/?_.|}9-q^ЭkamŶgZĴrVi-Գ˫YѰib#uQb%Y;N/n*׭_yXWr}V$d)$UDg{THaeb2Hd. 
NBs G* l xDL $P L2'3I$SA te?8{&`.y{5$jd PjJEg>ː-R,d68]fڒZJKU5Tf(m & Ȏ@#[I?+^~JE!!*c 8*A#fG]t4XJjF=0k4ܴ $[{}g9Omrq큑1.U{`JD**?q@lHFjQ+TTjq *Q X<0"/p6fgɟpqx9#6Z&G`z 4W8 S)d <L>B^^Qb8O$ BR : 1@o̢7Ok0b(]Ajq(jʨ-{^20rgqg )Y195GyWƓߨ8r&1J45 Y#~ż&9hy[;M.` "VӏC*#"#.'.z)0yiy`#J#-8zH<.":4 F ɀgB ȁ6F (BӠ8# W¸묦%⢩wi=2 m >iS2,`@1p+xXM;C[965oޮ{mXnԞQ{$ϔ9ai4=k_H1,M`胝THuDXp#oB F4"l2Ss4cjs:ꨜI>9rd!v˹ByIq5: F 4YsT9PA%{UJ;ݡƃ!8^]7mP}HNNdY:I>%=|i90ɏ\`3oB8eri55uB ^4/^ڮUHםJz 1A4*L&B\e@D-xŲ7[R됼cy|%!AR1[Yh^1 +xbI:Is(`l| )޻mCП;glED!jId8LH{#a:X@TGg2z8&=yiltkeTR,Y2d&ACLU_57mx2J j r` )붐ZO&jw@Ps3}`srN"a's mמ)$X v"%x -zcd}BO;q83jц*5o;F/P;jCn 2\*>E0}ƍҠ̈́TI~ypo zG:|r+e:Dޅ(Kl:g d!ZI笑y<-䶬~%]TBZwmK  ,I67\_:F?%^Q‡"%DCڀ%tW>]]I}<7Ei ೷}͇ypopߚ@j@>Ɓ$aMw5Y%dh3]/wUzJ+zkv_ YcϞ'ӧ3z8FB%xsXn  LBI|Db44h4BkMHeΨR*:vYy؂W`)llY_;XK;\ڧ&4m+g#ϒ(SwHR>'~!RR&)JȽKrqS39ti@5wAK9pEG-<8 .88T n9 HIb߼ ^@י}x~[yI>W6=tZ^ru|~8 j McAq&$ QYRj m)s8#V!q20<0xdG^(Ȓ!#A0ڈ"`H&9D6#j7*JCz5&֨t58MkO-,Q*T,AtP˱Y d.=!xJ`p]hvg0 =Г!sրS\ VpGh%}BSkF!85`€giy]7]F=81rk yZ t‚CG#ʭ$ !Y*fo2}eԢfmDQmϮkpVx}![Kmx2[ qSXJX "JpGIU!*+ bV!d~+ROןø;2:B ~4c4FML^0t~N Wݢ4t}qrR|);ŒIJ+(Cu'd "4\0҇ة~|KxCT ..`{\/ K1AT[B1 JS~z|a҆^v.~iיާħ1ӿNM/ n`q\M|xw|o'H-$썧(g<NLJxpy" Eq°0xL~!:{{r߶Kvp0<|  5r\b|hԥ $ ^)Kv@5yNÈ ,fvFӼݸ6>D3n;8SS ]?*<"_t3V: G"=EZAAdrW*Bn槦Ϳ^]cЌ~ . 
$g{pInqy2%>y6zY憎 =۲c@|DC{.ޓCOIEɡ,{CխVynmz[1 Qu{hPauGN@ I6]SוELϨ?sX0oJ';0C#JFYR0b,C,,x6ZLx%i$lP$.)¥ Xh簵jmH"΄ OIDV@/i('R5xb5ţXmT$CpOvr*=w֓9f\eg 0[\ypK}"ҒQIKE)9,F5]>۽ݴ>Rm@dYgNSe#,0;0c("`۔acV‽pĴ}}6'>\bf|vYD L[-ὗhfxc\qTUւs7s,kV-j8ق=Ooɾ|W8g[3Ok}+˜RqlqX,Qd eJ+gI1Ɏk.߳=@[C,rPqK)AV{ 4/`5J!EぽJP0pjGjd^9H@(Xe[&v)[BJ,G4⤥}B~.A|0tq/+`鏥ܬNI@%5v] *?'H0or4 ] 82s/{m$PԖR2|%gS}҄I˨b=ե.[9[ccbzo Sv}4ۻz w*͔jGJ!-RՇ}>4ī$a`$l42U?OvG&'Go᝿IH.ϛ[]:9]F~ͤedΨfRru1E%@$KS쬚.ęʓ8L517*Jo9f ؟H\[euev:v[pf慉†QsuӷOTs\1y79g_0Ⲻ O|;o)L"Ѧ7hc@~.j <,?-=C =dV0k]D).c;GJ+NR*Wpp5m)CUX$&WX)Ufzp * J^ |-XV=94&R̚*KUL bz] qy4@6}te,:uy#;,$FL'}|_λUz{b)C/T[tIIjȖbܝj6v벓_ot=]g bS"Qɨ"YFog=WiyMF)$κ9Vs5V0#*)dFRnw1*vMhy߲6Ft̔w+/A7S^b8 cMX;e)~Xf >s+pA}1LB:QDXu`FgT|ϭ'Dpo`D@Z/$)YjIf$W -Fz*IIIWW ~PMI`y8pUP H;\%)IWi8Jp(pb*I)3*J)@Vz4:qV(.p.V#ag'7l/~tܔӊԞ8[☼@5:9G?]jQݯ?]GfϻyЙ~=ؠM hIfI)=[%x\jI -#ޢ,qsTEǑ Ao/3Wv!gd /SIuf4:\=C,D`y@rベʼ{HTٍx$Hs9gNΙs99gNΙsP=c9$99gNΙs99gNTQ*Bv۔jiPW[^ i[R!rFK*Y0R`D$ Lznכ1̤o"R!aet,*(3:hy+oR|0qFu c̶^|SzM^&/*"5X 1i 1 N^sP>=O}# Gw*㩢| pQ X!|@5\>EO](J^Wwz؇7gq zh9z]{rVB4ی>N!wgjJ[} ;#=$etr:B ,:d)Xyj)q"ua@\xp|34BOF¨d.yl^ezKPݓ|ws90}eeM/eE 7pL8k/^^zﰙׯ[O" -׭" VV pz:~~v*\]hʎ&̚Rɮ{9/&Wof/<߾}H5oypU26VF@&^Z{Z ϋbUZe=WfUM_E9Ge:/Βw9u8wپ/M5cz >5"$/Q?(q9.|>ev>>TL񟘘) xW/<ҹĈJh0ӽStOۼܓɀ2eo6 'OLв~^I1ǂiEQG .M;u ,*~D Fo9faqsH ^pVTZ >mP#afRM\f F¥/1"jF^Ч'+$B*y(x8X $Je ϩčAz2ZSE(t*W(e;+ogb#7;0Ȥ.)ƉKPQ./f#F#bj!&&2 !eGJɊUT,'%D1hN ]oF)IR0;2Wewk؞>+\9Pxor|or|f^9Ⱦr;gGt|҆ 0U3m~Wk Z_|I m$Q51JzK I DD Z&5xԀ:A(&"JRALLF' X zE Fgᘋ{窔[#%9ỷL#hlj^m8מSr92N2l>@JsF)5c`!˘wR,$HDW B8,KͺY/7 4^.aU$&BjJzϨS9̨ՆQQ턴h0)unN!O!d1eMIJ5PD]N_9v`<DF"GT$@p\fCtQ3W.y;iSŭd" .y6c]DiXb(ZI\&D.8S24JtI ee\AIU g?"v|B$\R52.{\|20RcLK 㒋Od(%eçe{\| \<L"H^;AaNκ˃(&v1NŌ,g$SA{>-Blg0mt Z֑EZ$֠V :alWAe,81 ~?$\~wU4ܼ>Vg2r REtȥ"S@;iJH7eZAOCӻ1[mM0Q5OW$P4ج8fCNLY$Vо-T;f9oXhLa3C9(ϤFU6mqnXZlZ~w1֖b7w40e \bN qZ8'fNb7LXv[Tn'wE.ht1~nBA*|jqÙV$+r]Єqj>]:]{aaڇ1E7gPt/̟xvzu//Svf]?=Q0غ8¨#بlqs'Hk^ZbOQmMl]ee:@CG5=i"n1n0h06~?(acM &:No &C2uD]oZzTgܑWn㒉[vvi0uݝMg]ȷkWgEȑGuZd Zۇ+_tG9KRXPbYԆss^nB*ul:/dx^6tEע[oVmR`FB-{]O&"1,pG0WvB|4?w7 
街QH`U*HDO]B[H ^$?h- lޓ2~oqhX3.wynYj n>QHi} ^e,1ž -*yZY$RR?WDg%~k h%ش󔴠 x`CHK8-drBF1 Wys!s^BRv{CJ^&WŲ57SO;[&7]ń-]B̴<]5q-a|JHo]>Z*pMY::|˖^%?24=90#g*A z sI&PI^Gnb\G\mk p3#6cn|q3veL;B gM; }ֿkkxg6/nXyZ\)?Zg.Z@uhT;|-ЭkC|[qkFۉZ7^z%II܋IZ+%|x;+6mEU b%= X,1DѲL!W^ Ko99\}QW?Y"-cNDzwM43gTY2+, )$J`ZU=tZs! }IJf}Vd[ wW*25ٴ59ͦ_H؍P\r3ciss$iN8,_WVA<-TF(\++UAWeJ nXG#CG F9DhbhmN2t8J !霹,k%Fu[㾀Я2 m Vbk1t%O1jͺm9lzOE WzA%>oJgi{q8N Uvv6 N/Nrʋ雿ϗ|?_?J`qHOg1>N\)>cwܹs68)1OpJb䑕$Y ͒g<"΂wwNfwlƍ苚&i 4GSWsE<:v3i(0C:|U_\{Mg#?Ր;E͏Dk3A?Rs^6𛴵%'nEi=ru4^.{3Ǔꅴƴn䴞?҃+;nDBXtYĤxgv'arکlɵhgQY'cdƒT r0G'Q3Ph©UuíBW+}@ˁ`w_,'1:UWv ˯t[< "֍||tYpZMY9ƪY,L.yC: !0DK{hN|<z eg1A4*L&B\e@D-A];tHޱ*4Q,,rGi.C>Rr&s9x'SA ?kM7Y:-BC\KE$ϡpT肱T· *!uooe`&_q8~fwF MQ&.'Gr2>|*[pin?}7(& % |UV VQvjEGIqs퟽_̄_fݒ<z7Ľ L̥}nz ]-/kM^QOk/<᭟6ٓX G#/YO)x.F.E-ğfQCqV Ւ=Q{uU1 O#*w9]Ylv =rf5b?LLum5ћ~Ejh[Jks8cݭߢ^SzAkUo k˦.ZȍM4tjY;KmЧ1m6`:/gWٛLitq=aso ҙYʹuf7uﭖoN-/YG7<6:O]P 7p3Ug{Xz ՜?mnΩV޶ LGomCq-a:Oy=xjq^7d ~r+\W(3ee3,GG_rۏMwC[ݩ,ܒ9%*[D1HGDgOV҂Lkl.7pmU,Ỹ*y52[jhäen֚8p (qy>Ƹtx3YMVlBגybIS? w4 6Us><'1_}",b_+wCs܏ o޵@[ bұ ֕W:cE4uC[hxY3zWc5?^wܞ_nGd %!2'A:cN mbH(\2uJm+]3]ZL)ڪKX2^589 HNh-Sv"`NZ9`R NEoSz|7:fvY?۸7΄iӄ[09 {X =&Oa.m=&>=ysWυ\UW \i(\)]W_#\ |D R׽UbUmc z/d$'$V8azL)bF?_Yo<6&M8FUW9SꌳFIa47BG$hP+屠trXw/Ny')k d*d,e 8cMD ,cģV(myDjnLL~ |=GNhwƊw}z]q'4imoN">׏<&禞J*ԟ(V+XoK$ UFFi1ۘniY(d\&;{{|Kjox#iޛy#uN^Z y֛F׭=NVwy+_܆:T2wc5̹d:,r@\7f|dqIW5zBYp2Ǜ]u9Nɉ.4Y% KNW%C]%s>(_VmUEZ\uZ|cؓcKΓۓmdӾ}'@,}3.9MOiѰIcTp1Po[X(QXAGuk|emQd6ѻ: ڲJꞠcVd y'rre Ⱥb5GkiWS7P={{|ݓ`G}80 3 hVB sRB0`)~o?|j_YۛTWlp#JSŠX )AC \g_iU,.;X;Ҿ}V$d%,*E_2qF +A"쒰tNb=*::k ,[qj:>OArׁ8#Ք3eȖ[)t2JhLa%:Pq9F>+^>%"h!*c-Ĺv:M'Š1>׮_=|^kzNɶ,Jn-K{Z^ղzxUȵxeiH8a0.X%.] 
', :ZEFmMN)4 ؔ%=K D0ٜL@Dg5qGJkXؚf숅J WJ,N/ 3>[U&ד8_\ ?\?ʼnόhq%'x\$,MPhuN9`pQzixjeb$ BR : d1@oEjxږ8#vҒy(ݚvڲeԖjw vƓUёL2P  KvN#&'4ԨwaVxr&1J45 Y#a}Cʊ&ho[~<Rh?DlM?vEDhC7i{^:Ea4 cr5i z=t< E[',g+>LVsd?%%H3%hJ@;z0 NFK N?Դ@*Γ6Z{y,78fMNLAz;tV4Xiy,>|sIfF!ߌ׫βϋ e|Ң`ƲE0urPrC'3MyJrRGy{,\g%w彉$Kߝ>FҼ}o4{h{fWz=V+Oi v{Z7(ۀ1]ظ,`*ʌEdh)-Q7|Cu& zEA1 }G/|7L tjQBD\<$Uأ8=4ArRmS\Uk?;]-kWHr$GSH-_1L9Vy;~Аi8Uy"0yꍢnQ㏋)EAt| `ܞ > A; ]ͦWh 5*vc X̨7IbW4Lep%́\|y<}@jY]2x^+8|LEceglM]97NvQ 9pF'faK: "iRh E!@hbt2Q%րS҂ģl EFX` 1؈m%4YxrE؇|JT*Z*j׫C'TnΗU| #JK2M[>oɻby[:Hi DʑuTջ@.8h,߅wK98"*D5yyf ԡV!@ 9* $%( Ds i jܗHI) X%HH Z j<݉](e.0ɲ$JBR"Q6WLhV9pKSy A:N9Na~RL¯X5FUN'R A+G|>4U`␈@MvmRyô˜ "1V.HV`G{QW[3O6opCiji̘@IJ  @5q $ kom-A9́\[7ktrw^MpO>`K*U6KSFψ&9XҢGl6Nֲ)M'J=cR2bPṪyB.MbI 9ʹ@]}m0|vFj0˧3]zetGᄌ>fe%# Tvl~[y^6rw iy]ލ(|}!0E ^vrr` cyuW«Xbm}N*Kףa9c Eԕ[R TiX ^t{/[wJq$KpWU1ءa^~3+<^N x2twdjQ;V9>/66&`D"eNeU ቚHTKحP6To+wȯu=m) LZڌ!v߾Vr+GkB9̍a"Z2dXnk">E)xPco$oO3=,}z J*ܚ7Tz,{3q8ZeN2RۮwXz X #|ۅЫ t ]&j8]Ng@9Ĵ ؖ_j@!J)-ʂ26xm%^s $V G icIr9?]ǤbH8J*Q LXm&H5逩 QVDzc0s& A<8Bh9O: !Jy)\:tٴ]MQMo) SAH[^Ə;IhȥĖ^z[f&h;^ PODŽu˓ƾC&u-󸤄$4!:2DEa4qDL9S)2vx$h jCM8`iK}\buFAOWiXLn'`SpVmQwnۖ~6?G-ec{?E?܆7B{9*oCק3<2L^޻N%z\Vl`w_wc+A=˘n?i[VL\ 01ޘIM9RJ*6;ΉA/yrc')i,!E4S&`f @LK8B"&).=~6̄xJNץi&=!uf^SsXtKa,(/HQ Y’tgW`e`T,KKI 1P acp[*D,ΔH]rgWڳ9ZN KXŌLQb&6{\&F Ǒ=&zm|JK<r5ۭs&GH> ? EnV h܇MeU{Ϫy5 'Wi`9PVpZ{㧳!qƩL\RJ(#5,(ad]*ꐁ^nF9{3~T&HyIkءZkv7dضJh]Bu9o{aD6OTݳYw-KhY6ԺQϫOwDZ0vDm<,Wb}':n!x/p;"4W67PÛ-3)1_2‚0 #mN ps$I M)r'";ӎMW=[EO'y5w%9I!Q  kZC!!HL$O+ oOD WoK"P-Hg'K!$fc+׷'7dfb ?Գ@Qۚ]?=l /!đF ٷnJuIoDK͸ T&z'9yiA!.JbVaAl"0d^97)(5)} =ݑڍ SɿTL3ؒR,Õ~ pr*owi{O\OGHO3FDz/n'iM?̽pa,ާ$K.Y??X|C9>RhJc(0_We6ArF躲 s\׿WO/GWUg\X:J[\B>֗@ nM3-^4)?׸w=oK$?. 
>EAMB f[ {^O&=USlb桘y0jફ#Ǎ<vuG~'u$4<򐚐E +pt@eg&HjN&M-{nEϦvmkgD8fgt&\H6(@b$XΊ(iɄFg6pꙗYw,nubnscmɗ@v)OԑBgzRŨ!PƉ&R(4_ :׊C t6>BSƁ#p(45iw[ą'/+mH_ev~WQaE%E /L TqrKϪ=z 2:Ix H:7$ pm6Mmmk/\zj nÆXɆY`:ŻI^ԝw+W[gMjQWv{&Q<+co}[ԛ5` گ XuUyph+ѬXK8=Xy$/J,s+9lT6J!^ڠLiARLm eEwr~^%׽Y$C,rPvTsD,a`AV{ Vi`j9LYcU;4`k U c ct@Hm'g) 11Vٖ@:&t7顂U*#*K3U@_&xlv䯯ګ!`q[~ssUiGk.0İ RUL``3hkCI^r$QK-9+R'MYIGI}Q\^K]Ӱ(-ȡ!Zlc9sUX۳c]oRm(zW^_~U/m)Cχ*HuQ, \9vs?Jʊ2\P׼۔uٴZ<@A){-ǐͽk']ja]RJJզJE6Tk`*ҤYmT9 K*Y0LåH4Y7 bI*~rѥ2[DF)wЌ(b9t^L-$V]Е7&;%CF܀BLrtXXt"牕ntJ`ZP(wo`fmykB<^ҋ%Kg~r8d͕qG40}S\[^]^,ACdВiK+B,#HayP4[8/:bX2Wv}# !z0USE%(ja9z!J>JP5ʜ3WKuNY+=u'<̟-K3\6-7o_GqJhAZ7(Āu3R(0~0 %myRhXn)1DAc,6ʌ ;)`#*E2IIw`i[#a^g=> &V -yaMQYٲ|"0%[~̒%i3K?IfvT.Cf "̒] :!qC5ID"ZLرD%Y\DqZBtUU=P56ף駠v ob;~CaR5qg /s5z?hL/ןzԓOǺ8/wKl2@F`0`U$ͫb[jufИ' >SޓZڒ-Jm|,S܀if]eU_[Ƣc\kJ=]3[$5ɊPK>vBr_] /ǵ@e*sB*,ɈD:VWʅð,^b`g2uB >8^ ȥJʣ?IT|p1rB HJv**Q{h4Ur6#"vW\JOE\%jf*"fKW/F\I"M;RXޭQ:ВbK-.%5.jbv^qs)^8W;:ò⿫awV,휟(sRJO#VyɐP) -#ޢ,qsTEǑ A~TӅ]n>[NmTU*ľ#3n0qcZ[Xb,M:Lz,E`n3b& ?mZ7Gy6h! %g%)gÞq3{a**BÞUo8=gÞq3{a8RÞq3{a8=gÞq3{a8ٲٲT\SJ q:\\O% H{Wvad$&^KKYy7r #b`AZxA #ZYŰ$&6#p`(v q87ԭIAiǯM'ui\]?k5McAq&$ QY0dJhKY"D",,`xsyH h̑!#A0ڈ"ILbsDY+e}m%ܳ&,-%.)u |U tJv@iPW#wt4(1|T9!KpSåG8O3 Ņ8O;g0\,z֊,yq8XrOr;4$}P44 8 ?KV$OR׍uq+'%sk`nUa9g`r+BGb}d 慱[ܾuYXóm(Iz-}@h "[җ≁Fw x>sWob|,̣2*✦к'!z)*RՇEO/L:!ancZ]gzdJu !}w۴UWS;C0N4t|=!L׃ XKJ&+rd` M?O>0 ]_(>N]__dP"aJ HNJ" %Da xߟt{Oz;J/k~Bkͥ}߁N/XN|>{cTa#a{qTD7u묿lBr:<:!ioRĢꊸKCsMCbQA$Ϯ< Ǻ!r1n3#hUi?MOABa v'g6+^MRӦ TИaPԘ2C>Mg LDP$Un=Դzxy ͘1mzqE2{=.^;b6ٻwgcܿla>y. 88=H˨dHpeTD&5D'b9(X48GUD_TQKGk-sxcсlP$.S,-aAak(BDJ珘! 
\^+B/@/qL{Jƒ;m_Nt57wӢ^x nJ ?)?< Œ(Xԍ[L?1W^_2i '"-Tb$ ևȰD9Ǖa$yGZq̉8zl`f^q{Ex2uJ 5wRK\ެN:y<]t֫ ," F x%1Yg9+g`8˅GUڵ"[k]u, 0 rʺ/pamoodlW4ovq`V,k,}TQ$UĨԖB0yCdy8Y;wP,.~'Zֻښ3ۚZɰ؁ %\I,#U K1dtRFs`F/tHXGeOb&S&aJ;f2RʃpX0ȳuаK#:SnNv傧3QyaP˳q>"kOzvdfZ+MaәwQ C4r Q1:jN5a%X[ 4hkQDoF!h؊ k;Ylk0¼#R"%1dNF03,*͸Jb,#F£ n[~,&MqX>=nbXhj10٠Kbv"Jdx 3L 4jiF2`νhjFked8DZNcMG⅓-CfviQJ iOvCrFGiV7]z}&Osz%.ܕOЏ?ɍR+t^Ћ{ONVam29n}N5h=P|l}?\oP9FAUga*L }r5}Z"Rn^t w_L pPyDE??YOz$Hq UvکC$x&1O񶥼(и ^rxv7Qo~ow߬p7$_K̈fql=2rHA(Z|TgeUB^],PW;[dIKiެ&eVU/}Pgz{rMgӪPz;i>]Xr]VKlysm.V7iKVXVV߯b4ozT}jʛ:mvy\s tqcQޖZy?w2?)μˣ}beVVܘYywVG7R07J*Š,%7$y%(LEU9R! 8]2Y k[@KbӴ>YAK@yC$drJBL9)W#HLRA);:{ߙNLQ=za{q]wn1X%@O,57 &餄IoY4dMl$5DtSպp={y7DlX!DO*A4 SNXȇPP֪eV^wM[6P(,H>o0\%`gf6=3u7Ĺ-ͫy]:J!-DԖڨPdTƣA 1@.K 2̔EeY.Y)0%[J=梱P5^>}RcO★օ(] Nٚ $BN)g\E'i} ݁kJklKcςLؘyȜXI2&f Ii)E Ek IahMr:{[öV^Π<@u\J:c~gqIfM;ͳS"Յ'AKE4uAOa45ԗau>%9fK?M`K{ ~d(^M%들 vI&2A%vTYFg;_oK8^ի_hi Pg^:~Y|l ]zm>w?cn՗WkVu2aR6_dqӳ<:l{fWt'"6j8)wvG#狫uh>d UY]?gϓmacd;P9W۳T# &4XO}wc僞; )Jȡqd$U6O6<2't @sE`%*(Mb?R($\,*t]A3qE>1b-mEG޷@ޮ?J̮`[.~,g4i;>W|m956.s21M!s|RƄ" >'߱zWZ>E}q*y 50R1}-q5$LI#S{K =ٜ}F NR)@1KUDJd/,xؠLİ3qD?=qzֱi1PK @(N:P(QL|(R8VKpӉQXRl4< 1h2f yjQag܏QlLT/ؙ~슈1" "nxLmwRfk ^.L,8meQ)gKg0'AR1VBJřBrd#-d#[ -x+1iЙ8#Ƿ:0.Nꓯ3-mǸ\pq(Q;!4hI\ɘ9% ʮ0(%Sy \ vw_k^]##=. 
8h7D?>S#s0S těo:QaZE*l!h/֚~( zmGDʡ0&#R3r> ,iaZ8TAϔ0De?; Mn&p֧  x|ɒ"- 6b"aNwQD z!dN%l\.%W\ѦP\ 24Bd%" Eʅ\Z?/E 9cA*&`,,=:hDcJ+[2D WJ!z1#:HrпZTaiq//&@;SuFJv^ ?=3815,L..\oyt{=?DT|\\ fnw%P}_Ѐs|́).́) ag@eTdrzƐyoB"X"MSf;#ePѮIA +J$6~\nuQPvzb{ @I]nɒR6:%ҁ*S 1'"ɜ3ՊؒpJhV{~jŪ%YJwAfRmeYdP( i%r2P.Օ8,p9{$~7{v8QHҗm1&M=.1.h*m u8˒ |} /dž,,}|mP }#D`LT:!m?NWj|>rwR;ǛI𱹜\|̹} =| Ud ,ꂹt!e_ Yd/Qe KjM;S׏qtot.ti3yh]RCzk:s~י'r!_ ,&R@*Q*AV rѤP+.x=:\o3k֟طyF*ڶ(x3RryYnٰ|A}* =ẇ23_f5zj{$^c\gkg,tg?.44SJܺ1&X,Y} p2< vjAC,ZA+/v?]lݻ3HXݹRSZ+jkyszr))5z-pO#Pd0vcF4CcvW|D\L5Cѡ+TT \ُמᎈfd[J3M֕6fؙvp-%cM":IF1T`&}4 4f9{ٻ0FGxt b~;z] H~.~jWOSi6YhSEK{A1#3X2|șb_|\S>ޟ7g!1fr'oj*uvNTC*)IW6սs2J u}":,ɇ%jѻ)F$Zt%9o|0 46t[ՐRƹ j%$DH,8dk]Y+CԨ*͂|^Eb@r =7)cfEʧ: VAYt7H!flshOM\Kͺ#oG0L#mTbˌBpvlp4J1vIEA)1ؽؠtzpa Z 4@sXR+ Eey0Ph Xu>: Ţ˳եcMm̭l%ρDHE`(.t-a64GV1֕5@7m0*& 1Ҳ1 k9PmPMA"׎]SPPU&ԓ)e,/ [\Ze R5Rs #  ʄWgwKYBШ(qnNS:n3J*T[!zcOl&!̂YU첊޶X uszg] R uWh%WG e2aͷ.Q%.q'b΂G ݄Bl  ` 5RHpPgU UU r ֑lC@?V!joLEwf%RpzDCBiў% (Ez@ߑP@PS(HX|F*fVaj{QQR@}J{Fyո*F_ʔ<[ԠˬDtP, 6A Z#b C$u;+aMhjE}Bq>#hҼA;˦tڦL[oT^"TYǪd 6RfH|!YZe1%OHv9:FEE ʃVs I"92)e(RȆZCLc偱P&ȜݗMd :Ynm[ q;2p䆗>U/d!T?Q y E*S ͍ .k$ S`M0Kv=/OOn̻ 2ߢn$F ,/^q5Y*qcVHm``-nzsdjlC6i/O-Q$g1A;FwH7םkFOʖa`s˿Nd4QG]Cbƽ6$I#ehPAiӄ AzQ3`QD]tR"˕-TP=`C(uFAJ@2%fdi3 A)w>֡[6b 6Ł*'V$5զSkB $\a~"o(V1Z8P1,*1"@ZʍQB=&zuK[sOuHMqQc@sZv@[ܬh^;5֢Y 6jP%wu"1KCbR{OkV#/z&D3{qU@TcUwUvAy VM@9 Z&[QZ =W&3 M F:`=[JFDh&.TS.\;mKC1ˡf-ԤgLMH28ĦyBeN Zf !KkwmT i]wTg0B>xg&ݣ^AoY|~~ux* HOL9: ՛nUG6}{yTzDU2vͳ}׍wwM892p\h<=PU:'c $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@zN  tj@ Xhҝ@_(jJH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 z@F2Wz@kj@@K;9qF''8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N q;`\qRzN <09 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@z=N>ZG|۳hvp.sG &2d\`7OWՃв硫4/gI+zqk-;pٮ0ZUUdv误NNǸ[tV/|>~G >:?CbsPOgW'kV~?MI Tc0!7/MD'}.Mn<6LieG߻SmS޵ 4m l:-ߴ ť1OmO׳2nfjVvD'e'o/=c]MuھA+\ZA&zVB\Ё`mP)tzx|EtEcǵ3^k+j^]0\BWm/)ztVؘ<7:hË2c18:jY\R)Zzt5P ] 1D&ܸ;^:] ]isW o 3gya( [jw+'tء{Z]0kpmX ] 1O1"tzjs:G|3HcCVp}@AZ 
MBfUb^:M/chZE˵:d,&k/nK'7Nw*jiIݟ/:~0OLdyqs$г <ۃLϮOOxЩ96#=;noǰ~}\97ӓ<"hcʸԷVOw>87wK . geתSILej<Ƚp쩖R)%ysZ?ߩp{0}f8#(ƸNz߲ؠclzx6.\d7>|^A_}߯b?ڇn]oIrW!dGC@ 6PzTϐ%%jda45=Uտzt_f=ͯP2Z,gG+\|C^21}tx0ь6WI\RLfc-,g9w& 4rڗXċ&Q0ifwqj:T[`a9k^Vi>AMYKZg]q}<0֋KCeH5Nzh4 f)2AX_wOlں6bHH_z Jkv2JԀc\2q1cץpoEMǾZ^[ àdil#Z+xv3-~ZOEKd07WiH$Dm†ϳl͈9{ypAK.77?7&"MiS1d.`3Kdy_7'4r}tKm +rG!Qi\|͉h0azXZ(M lcjXa ig>`ٛYMT7nlS2h tMг=f7Mo$D!)hQ%N;^[TL󔵲Id ѥ@g%n_Z Z)#(?u={qYݎڔlgs=f$; 6b43NK@H2jSnh>\nJmOgb#,dc~z;Nޜikͭdnhz:MUB3Y\R;ʴϑ̩=޴ ;P @ZyR\}iw]tT%pRo U`VN.,RʝOvI|ERI x L*r10ULBspI\#%^H{=d_o Z{;zw; cE/jO1 v{xK/V}uG8zt/<%Msft5Fݤіp9֟Zͻ:xOU::AmÞ.c KqL 9ri~ۧ7 wD_4tƕ-%o.\ ?]>w=cq24,o2v[`CLJ?uoӅȓa5q&C0`wHEWxxU2?۴ף{z׀n_}n;֖Z7iҲmtih(VW v1kxo:XꠀpNb!e YK^>^j};8偻R98Coў9ˤʒYagH!Tf8UXA5RL,kmgMV(JGsE :89WOy;oiW5ql+UDۧ+WݗJs-΀A}@ɘԶslSa\dl<\W]'l U>VJPŔTeb z6tʟw2L'/;diBƉ,'s s!VX)LLF' XMN2$fY81Ee$b\)"\C&d:vFΑͽ*LNV/::"ҜkJM1e̎;)LC+ !qĖџN^sܐdOJ#;I5/PE+4/:Ek3r *^OsMBn0|.g:m.)wbd1 ޑ@oO.S.[LkolT+NbJlt43$95GI/RmzQz׋8^#CA;ƴ`0.14LɢQJ!) &їRz-θT}:և'~@mG5]90]Gw>iMIp};e?rJn7_MB]W5-ZFJfbQ)bQz8<QaRE*&""WEl2+8ڎK'AwYd`dBd:. ^B^3,C7*AXevWpMnSlHFZAAJ(/3-!y`$?$ Εq!8TcSN$Kǘ1eγs]:#7Bt:*u**W B@ 뫯-} :ewHRLkQL̵29{geN}nu>Aβ o7vI>6txw(dգRtlA{NPO;i/&y s1.8;qh*fUT( 1io:gjҟQIձ)DKLNXaBDDÜA puijm_ol*iꑆ>#i?wdi]/L7_OMsNT=-|>zNӒo\uRo] EΠ,NMVKy12E&V;E'aQ T琣 S,(p\g LBfW:/RS,W_^|(mdax| \'$ϱ4um#Įn7~Kg쒱[9{5]2I mو6.&96U?/eszvnx7?+W*@BVJܙH`+fЏS?UTl]0@`&Nk,%NEA Mm34{|Wihp!9Yc?Qt$ X.G<g~RmŃ{AȆP-Q=3[/?edS?#\{I$MZ5 Ŭ?u#q/bnB+]OO9L0LH#+!rOv[*W쮔4r"@N ?P6U^2VgMޞ)^(52^skn8WA)sc_b'JXے@Ew? EAĒ b-bߝv_weW{~_DG2i FT ܆}+qiAdoe61i#ԖKt6L}֦M!suٗ6vHӉ,0b~ ATg& 9LAsJ|0 oCN d)P4D?2I #A]n?ۄؿfVfO. Ũ$ذbJ\J)j?.= bg= J{"-eRWCJUhmjB X`Scm9);]Rlɱ ZZi# = YpI2DR'aCl}~b掠(S>3y+C6 U %, [iJ5`rV@jG=D5w8^wTac#)Cp3-qəۙuN o1v[,{*A@|:QHд"Ϳ8 4<ʞ-RnmeOvfgە8=f'ҋa:ck[WդړI(3VSoHdvc d/ o4gTT77ޔ 8&oŮ`H5Lwmhi+D)a [- oZO`rO9VжhH ()Zl (E8tyoB&^GVUq f DYgdWL9ɗwۊ hOAब $I+ߚASx{[2:ڀhp:;εs,THFa)!Tj#?hg=qLM|VŚ>Ţzfz,gh Y.@z(~zOH/B4w>NhU?@UtTOy ;4mf-jsZH ~-U B-̵u6XZY#|w7wgPj5>sϯV}݃>wy6}x⡥HFO燳ؽZ+r4*0J+!43^X,:A `.q<@;! 
dT0T0<MGL/lh 8XH*2A0YD.xc h 1 )T|WSDK>lQ~L۰ 5s͊}(rQ>}$WR# ƾOԮw)4xY|g1&9V] P  H#+oC @#/ւ(q{@}|\!xvRZvQ Ax -YF㐬4)q$C3KJʵmgt^Ϻ(~oUKS6O7ƐlցV0䔂lR2;Š}\KGi$` kgʟ{\p}oڴHӂBmVj# K ]^j#Pڌ˂pE*W$WC)"ƍW֙Cĕ\pA1"ܖ+R+Go]ʵ֊LIrR1]uEr.W]JYq|plzI@W;g\3{>.\Wmzْp)WVqE*:D\9|TWӮ}rg)?G?Î?^KQef< 0m:dzi\oI&'76͹ۋOBpܮj*k;} nJ cnҠ Bsuq>m{\wKٞ(-%|d/3ux̋si0~cF ƵRӍm(ʷ) ׬س)>Znvx*s_=oi77ʛ~ Vydm&V("`}T+;Q Vf4'Jmhne.(@-&rHc-J5p coABbpEr +Rq*gW+%r+(G+R)\ onYABrՓɕŤP(f׮qE*UWZ9JJњ?Zǎ+P%gպ:D\- [UH)Wֱ[WRTqe+lն\Q5q匑EI.gɵ\Z`|"k}t\-޼d)Fka~r/{ʻҌ-zhs+AAABJbpEr+R;~\J͠q%4l=ab:+6fnk 4 e45LZgƎiTWL? ZpT>@"֎WV' qEkclIBbpEy+WVqE*:@\)g [ :fJ':@\&fcqEM93$ Z.F JW+ 8^PI+&}3xr%9Ι*'NrM1"NW >\-޾dhVlqKe{7V)vOl\ي6=Vr^Hu J5v\5pj|)Ũ[.ƬYǸ_@iצ.FNr sQ̎A}Zֲڲӫ6 ܠe`-GK g(HvH%]=[p%EBQ⼘ɕDBI6c-JjhC R(> [Pd\ZƎ+R9WO+h+,9++zTڊCĕ[P0\urg&cTPquRi&  ֲ\\JuP5qe5+l.W$ԪgJ+Wd%9$ؚbprRpEjak$HeʁP; UH-WV JYH<\-޽dToKz ƇqO.⪟ZU?vdr=p*\I! \`gʕB+RqE*z\ 8so~tpK󂛲{\N0-ai LZ#ǎiRDbZjε,W(X}ǃ~r.WV ;H%Ȋh+,U\ZcƎ+R9WO+^bpErE1 U4q V% kNjʵ+T;:+c4ؒfDQ\ca r=z\JQ8W2 \`UHRpEjHe]$TA"DkX1"wIW+g嚱r}pQ8?;ϺBa'I:?|j~lw5ȥ_7c)MRhlyyMNѳlY0gNv߼/O ~y2:W'޵d׿B ..&$ƓEzIԊ+9Ք(nevŦ&}VKWdߴ:6<6jcsG.a"IÿC_%Dz/ -zOoR?JQ~Ex9ȣe*/w\ +8?.SlvݭTE*J[nݍFEZ!8w0sx?2xwqjp;'fV{cddvZZj-?^z1-_ϾvV,]'] آ[m ~gg٢YW (?X7zSY+gsE?RO{zپ`uVNNwNm`6>\l|c2GEt~uuA&a6ABFsm`y4}n_vM<ӹ5?;zLmze|9>S64DN3zjI/%Oκkˠs<-e6;*۠{bTBJaP7g_blF(1dE)r⑈xt$OZeMʝd嫏/wY=FGq>pYȺ州ĐH#&5vibl._2Det!R" &rPɃK|2P?IX}t]36w?{ؽ.?1gI>(ޔ|6h3vÞuDtP8'-er\&6I Z#~@A馛&r;wzpp֪}),ޯˆ/`2",{9m˥?N7K>>{k];'pؗwp.\9 ?g1Z-O41ˣWw&u>Y;ZZ(OѬfY[;J9p=cZ qxyBDk+mkQ&r~8_Ye}| IWUcyy:]K{K]KDxϖW)p%RZ3PƧD5=-p<'ZAcRNC+5(]Ҭ\a'Z+NW@霝J3\M53LCWZ誠UvtUPj7ҕ!&xM؉j p+]ӞIW$#W]Z誠?]JOtut)`.]5tUjzW@= J9c%Fu~z.U{zJ;bzMtk.UEtj~h;]c28+δݘ"8ziΩ`%wرԚ9mǺcԓU..hiji!X/EWB]FNW@KBRN֔]Iᤢj pSZNW@$JI*[]Լ*p-BW@;]m&:S CWe}oтROtute)wUzzW1UA (5DWHWV}*l=tFUӻZGo[IMC+"*E=*h+cnxGcQlt ftsZV{/w{jWՓ֌jXj5*h\-s-zl\Bq”ίp^JvbRr3peQ;̾2nӃ8NN7Ai3L:͙>!u))f=3-pP~(5M,-UbzuBW-}#v+i45jU*h+MtutUDWUk+̌ J>st`TDW,W Y ]btV뉮Ef-*+zWVӻ*h ;]n: އu+y-tUgWNtu8tU `]O5UA;G$R09WCW1sgW}[ݻ=zzOW{i]vU}X?/ 
6/+3{cSv evtm)fsN%lzr.c-a`:3MnTzw.ǦQ J%_Z7UNH2w:1k}0o?X nPD-ND?1c89ivJ!y&ýj3DUgNu`dY䯿y3f­^ڣvaApd07gn+9C LnOmez@ ] a?ɦEo7hDBeRekđhTϔ=ou-o %j~Pu_Z.5>Gm!E& 0)ɄB|ng_F9}V<3z\NQ#חwwwpݵM̖ ogpU j=аГ l:Fo(m0J :R 4&IBy#P JC߬i?LԆz $с[@"$9.Id֘x{oSF)' a8=qR F)cCt%2*y/F Q$TW֞\x#2 =4ȃvR:~i}0q=Q!>w9]]?7T04p3t}!:̝oRÁ˗?||<43| H;q黸 *~{xsȟwN՟oRo?UU֠ZZu7T䎜T:<_?˷jZ\EvSMTIV80+] E8"XtJ2E>qsn-RfD!BN>{z'HAB|uJ384j lb2\cImbTpP~X [Ҡpg9;Ͻ3vٍǐ.CdvzyuNZ6-3}V^od`D[W1$䅋 nՍ^nJ>c4Tnm]Ɛ.dj-ؘh3hJ01M6KS {KXbHL0K7j' FV;#4m']xhKgA'7WSACx-lZ'"DN@ʱy##ISrX_2 `\j_K"rGA h IMUin7*B"FJ '8dI)YƋsT_rmr*)$epp+Ґ^"tq3ߺ,#U·0Р@[Hnѻ*^ȻB,d2`GM{TKu*x.JZǁAѰe%kBXk9zyxwSy9V껓u}~pvqanY^xP{nUu>9|xur^>U]f}ȷuO{ ;aFQ0F !r1Bs!rI=ܥ?}s[>lV x~o /?\e;5֦~~b-sŧ9κ4o_ Uxߝӛr~-ApT4ً|ﵒs\oSX2ej$b:F`KzUTdH,Iɖ6rUR8Cqfx,3u|CpLK:-Wwm]F` Զ+lLH]`cDkU6#Ũ:n~DR{cKFإ5B׺kXt\a ܛhe BXe ۹5M!Pn׈6ܽM%R.,ʘ(t[0g\QF=SHEӵo06{nh0J;7 ^CiV:Hñqn C_]ҹN瓭\ )ό+9&9|Rd E!ZUW);g ăz}p{]^8\?n_}ȋgwo,J(rdeY8_r cCN^Yʐ1)"J"60$#%2RG "54Ys}~OtOJCH# TDUlyl}lh:5YE 6G@fm{7]|ꃌ'-$z MrP>μY)[œa:.y[]4cz -Ϗ!V6~0ls "qa Ut'CP0$?߿vFZLk,3^ `EH&l3kZ_3dCVQkc"MY=keݎO (55}y#$[.)Wlǩߗ dGOE~Ӂ6crHkܪPR QE*cT L̀ʓ6$Pj. 
H-) qH)Ѷ0 DQbyŊ9" uP %dy(cHB> `R!C-OM_htUEQnZߌhEzac`4ui˷Z9(o8YTklܴ= ~鞶+x s@bsK% ;̮9,k- %b`+s9XI#9zUoIK~ F¶mCl!2̾N zFLW7ݡE kVc-iӎlqngNQ¢5:=-ZZz,RO86[ A G)AՐgvD)[C^F樜MNZHN[ >$UWQ@ {–T_&=A4D,i P-Yv\Ld풬sEu ]8PJ$(v$龂)Cdz+pkq!J!RmO# !Chhw'^?t \'*<$_imJ-o~[zw;M; @:^@uf& ڒmh]0(H(X )A%GgM dbdFh/ -EkYh|05?!I,iIfH1eӒr&(TPE, }T$GUv$ "M\ 18GxGو~D0F}11VFJ%dr)]"֒ڻ}!,,Q4Qf$,M=؋hאޥ֡ E+,R$r1eC9-B]&IZBĤzkHS DL kCp@A,27 i&(cgh 5J+疨nN9W7jW̙ vNQ0g7(w+Ǭ&࠭^nn9,˴-cerwrqCUdR'4HZt5;N=nR&KrHk 8b旹?9TZR>d|`u8}ʿE1G=hlm<jsAf60ucߴhCQrn2Z^dqhx iGi8Ddl4]qO]=3v@7JsNBY;58 Gjo}cGPRȞ IsodqP2hVwS$=Ȩ9.̉{ړ-Z?W)C]_M$)g4lY>fG$+ײvh~ѪuޘE3`~ iꞮrZ-*i:IJ$T`vH(Y8<_'zt,ϙxs8|I_[uO>|viۚQ##n$olD6DT:v2 ʩ45S#S~Y#MoG9f \#j2 BS(ASUJ_]g.JZOƝYG70b {dAXIyY ?^ :թZ=Xϒϧggx჻1w>1u;:|r}ήίW:ᇫׯ^ ɾׄ^_crp^1O~ ]*MsY8K 쎇'Z2}x+ 5a+b%c?ԯUhfv2͂eVIL-5om#G9l {qn9l0Os0H6hǖIf1,dt- H$X,U߼O~pL,9o.a͒*>A)Cdu-?[].,FͼpK]䪱J~4O.67ο{ 5X>c=B#ǨFq(,,c&AQ$IhD]ݳ#lųoÛ|ILэ-YS(pZi>AA3Í՞h>H?Nc(\q3%fD-~%m'q5!P z+JH;KW@) _/JBZ}ӂ*lQ rpD`=ǚM1 jiUy=!T%(^ø(4ž+dB̒0i-%ֶ#0)Y*Gޛw/?RV5#ⰅzǩPdI0$xEd-wAdңO CH\8,wc8'xꇳtcXxGӡY~u᜿N>.~wp7cU)bU]ywl`*r (ju OD"B(w)ၢ L cјw[ țYq1__tEExW X'6"bⅴ$ߠ @ .v[-oFؒ1_6!M|(6B05 .~GHp ='GW>wV*M5붗* AG 곂߽_6lb {ԼOZh9`D5lhᖁ6_?BCD/EhgMq<َ3q1s`d\Ʊw(,2(\*%=XN1(zR"ɏ^^2dfۄP1%D9@J #>C.V^]Pb8WIyٽO"˳0N{;|4K2.|*^^@s ev%Tx%JqZRRDG>EtԄ H_?@vA\uA!DgRthJ ?[`ZWmOL FkVymxC!N .IERe 'fGU;q^*(% Y80Ӱ$,B P K *-$ -91eZ&y%nrRKk$wz?]a nOf}(f0`|qɯyJA_w/GL&!sٹ|B_bw!4yJ[]]}3a[*/4,PR!&сŔFZc,1z's U!4E gJl&Z+>M5qNSB۹2fp31qF0[3rDv`blϓݚj@g3 zrc&_r(ŋ{ b9w#čӨ01aW֥Ŷ!:&-51 eY"/ke>e 3F:e,^k)ξpΗL{ rF٤\V#{JgIFe<Ӧuڻ{ 1^W`sf\|ka:Ir^gy*SޫZy)Vܧ:uwem=vsݓnS 8m<:N;9 S9_|J)svRŇ\*֌ycnMqd%q)'q77Ih+k~[\-*2ϟL&ϣ[?+*?!{<J7PI(9Zd!lD:7M ҢNh0+ԧ-w41bJLTWMRJݲj:'dW$v~clJ^PO)-s^Q&zWtj$]PmqGsĻ7~g ˷f7_.ZfaVwԵNv Եcf& LYI0҇MG8zHP6Ll}-)ni^f k2Jdל$pJU˵_l «rqZM `qMS!+p@/-gkMcҼE9wf&2lutO+jJqPJ+@ݟP4/><+qɅv]]t76tcCL`Ep΀Чg: }jPT9fc$Ia~<94:Ucs@(̀LI ؚ*OYbdEX98Ei%Z(=B^4*Qg%%Ou%jv>Jvl\hX]hxh#یHS+Qc|#*bQ2Zdؒ(9Q-rrHQJl**vl"I*Ѳ^1CyXE[gJ#Q/W%L[nsQ",V];J6e&Ϭ|Mk e@GɉpRB=JV)##v-ڕ+[_/*-%jm*dwU+lSGMn{!l1"qU{v1r>ρ?wXaWqGh >< 
}+@zլ0 HeRE3Y2KfCd8:-MigN04"v offhcV(mY-4+6><<;247KM O㐀u H#ꢶn~v $i*Ͽ| ЁnUֲр!@thpTNw(0@ksfh5ˤeq8^Ȋ đOFV*FFOգ@LRHP{$.Fե* ނ?[KUИ'?~JPz3IAh:z@ron_=LK0o^=y.n1$ywÎQӰ,j30'Kf܄Lt>BnW硍\u gvH{(Yot=},y]F.os=ջW9w<s]69Lz$\$ɵ҆C%Nim2J! Y$%99 ?9M3f:8ԟ35[E{efN[ De.?yΐh:n)^E=F]* ZK4L}"䤔<'u]_()JQ7wg󿇣$^7I < Û?Pz/a| ^BwW[{…U_DSHR#}.2@˥(fafRIc?8?Ð8+S,,%&p<`TZd5\S EkQ0'=qNP sHd3|6r?/A~4}& W?T'8?+y?Ot0brE RzV{r:H'R& trY| ĔQBPL\%d`8o#PP0 `PZ|-{S|kaE И5on?Db+Lh"؝٨_$j]sӈZbHIFYHD/h8PBxP=03aLr, Xb iͨujI-UT }j)?+*O(g “i J1Ն:Na K#FJ 92B$+Y4;ʉ́nޡgz9_!YJyDfF6 ,ݚfݨoA5 SYPmDde2'q-!p֡0Z 'LʃT[PiB 1V}HA$W y</geh(Y8(P^%֎ip2 gNisE>o0>(Ű9zP*/iZQd d! Su>8%= ޭ @M27ҩ`vcޭ5u*m)ɠƫToѻAwުIѩ2nV=h+LYխ'M(ZʠrFV,Ioλ/P7v[B0O[nmeP:mhݶdr`ٺ[kFzU d! S;w+;9ʠtڎĻm hiº[kFzU d! SwS隸wk+:i;FﶽE̴5u*m)7O1xfVuv&m{;H~AnV=h+L ]zO0ʠtڎĻmq%7Z3Z׻!{GšC4Tf̝- bD"7Id3$)sUsѣ}R,-9.\Ӓf"T],d )9;Y [DQ twg~n֡j|5\Ѽ{x4; i+5k'ӛ.8eaK%!2<uߙ~X)pl7%q#d B:a iIUzV"ϥ(rWq_~kn{yZ5"4?;?~[Yz 3Iфncq묵Y,0q5 Jmdyu@{\lj<\Hb~eI5~=eʔ+辟7R)T]rQ.tJn(MULBwy5`"M/&B0 DCǯ)U"'-)%]3yLϥ.'^evqW8c3U__~7?2D O.G_wݗfI3}4t'O8X#|\ ŏu?7\{FpGӫ%]1 3|륧?t̆I^Knz; oYnvdrpᨲ9\ѫxt8o7|!'s<1,+J qB7Zx#`Sx^Fx7Uſn\oѭT.*ifaB&M&,YOgWWq%'`E~SG_P$̡E7a#t~X\ѲF`ftWVG&-KAmz_~ęay-( 0p &3cu5Wr:C6QK95>H-E(JgB)dD/ ]}Er(M]DS$/N:ZIKQaJ,g+NHR]ˑZ[3!hcyT.GH\k985g_k|C>RZ sHEwku$|羡d_j'uKSҒЕ3VkgE\-%`2(U ZK. ॔e}TIF=,*'\2errV1l7)#sK6-ut¦ +&׺BjQ&HߊW%Ht=E3[#?| rXDH*Z=8'ZGZFf Bd6i9DA,@:OSQrPg,!6I8 #"[\_},].×ӱ+mfSͱ7W⍫Z3oooslfhm?'o?\0l~X:>L!/ ~ͬgp>?)߀od׳˳iߌnFI%]_\@[ц-2q_!7nN};1.ZY[B/ECMP\ERwdp4"U#ye}v\Vvd#vw ZYLGE ŧD7bǚ]lڄEPh_9ZP6܄h+1[מ4b%{ pP۞t ƻռ Cs kuYSXpevuq!}>5n`l LHу 8W~ A"tm]5LR2}1 ѷ<~ -_hlx5W3xýB4K31挐Y;dv {Z (. 
?KTL@@#c  1,6;lNypAtd@@I"r `j ^)j gPuFg9*O:.93\Ν0[cRWyQc1y1CB{;t!:ύl^ح",u:vHحkJ!c+V@ݸ7t#iPB26"H[ NSP }lz9<l2ʨ7#܉r8/_WQyB >uFgL j??PunH5 fݫ:8+Mեrnrt) g87atˡ8=[|]ɕ +omϩϛ]`t=WFOӫGG\CFH(?:M|zZjdTT#]@{Fn$=fB"q*B^1c<=&T"JERՇb<4J8rq9 ȝ GɲF@̙ҫOV+/7>Fy7ˑNr~ OYcFiZ\bq]^A[2!}xX.\<\-ܩŮ>Ts1<.Y❫̿/t6jJۯvwGuu>( '|;uz= dC,@D 7 Xyi4 YKg+H,sz̙=-bϻH'8F~5(dhntZ^mAnOOF#h2+ c޶a#Ag.)LRyMИ<94iN06K+ 53#~2 ^ PhD79PѭGK ϥ?BE#6*]RZ9:tc/|$UwRu7)UwRu7V-&{# Y @f6 G!QEHى"[ ڕ4hm)t~'7u,sƃ hmk 4&,ν1k;>4|\ɩ?A!lgxĤ#&%1)I7ei1z0 "'z2щPXMf*G8,dN4NhH.څpꎍY1briQФ_}UeX*װXrd}i==AT( QT(ehdzJX2+MҲGx+8kWo$t{jn֘Ƥ5&%156D h)e⾤soIQl̲" 5兄wɩ(y d>ˠC`9C$U<BȖtB\b3(L Ren-{ElgT.ݺc Ѥr%_*c*]0ƺYgC׃ph햂k[<]Y!Q%EhI8&)c2#!\ F\h'JwpZ6QӡyN 3EP8#dŲ$WK5a9)F3sa&f aWGD=.,m*0cF[nSLiDr7a/&'n¯٥ ?- vM%ˇۣ]ÖLئH_[pdwo?劈ӋxIino?15@~b}s}rBwL ۣ?iH:P)+]͖YRN.:_NX4)~}vFC6 WPfX nd͝ί\vÐ$҇tZ??Ƽ_g̖ūc*gSh?#Ǝ?ldSc$Tve5o+,ZU[c'nƒ őL{I섏t$sɫ" *ԘX8tdϨֆ[Y+#Ml(m\f%n7)64=લucͤ5B+d Ď4-  s hZRDJn7hZC Hl.7ɬbp15}8i aO1-PB@N$LL>E,%YH'ir欽-LMpo_::Z@Z^* f5.)>{ns2rU\=<@BzȖ$_09–V5Mƣ y< ie˔OZha Ray[Ww1?ei͝/7ɩ ÷ŽM˒??ZܖM6.4`ߝMuksv<=?$U.n0yhD6Y9m83'T]_ޕYFI떺z?嬞zr%wM&F&}swЯf*/OH9QOAV$X#cFHmƚTs)*};tnYZD)̓Z%k&^lݒMn-r ^vNho?y稅8 KO)ݙ}w1&|FDrnՎILM ;Fz"Q2xf=opL3)'<_b(aR,Kͺ*t5KmvR?&ELtON0.Ep [CqQsBN6bwBXrI]uЉ),,ηU՘a?:4ȸTc2 D1l)唶O1/wjbAQrpUxdw1Yc֕ >0YvOlΣ6c .XYƘtG1 F3qNr Xfh,3pnlܓ6uXEMfʢ[ fx.5{+f7& ]]84w\=~l׳L*m/zvq=;>2|0]cFhk?$w2PCǩ;WJ˔JwTksC:n 8Vߴ-;·O ,w3>En SMSr+aylJP2"d&om~Ƒ{>ۖeAAPLj,P so$` 7it璱#T-9Iy$% |(`% R4EGƸ9Qg)d*IVf ؏oEhZ?#LIq4A43/Yna"ijźK#fuswZB`0śX1BʒZ(ERoO{5 c%7D"yE#V2󘎇; S讴= ~:)Jz$zITda?m>aLJk%}KnSp2Ba+AFE6E .{qfcEv$Q֑% G\:KL`#$fX%Rfb \!ԟULA`+c96\+P+l'+N2g *E:?I Ln<2jbl]x4 ,[uc8k޿ GfL*Yf%osqϚHYB\7YgY1O'loEBGizD:u}%dl,ғ͢Ė6k =OQTMwD7VKr7yz o7:<:ɤ:f7]&gHstaP[v+oŔhܨdIaIaIaI5 6Z6jRQs.`dcwJ]2ځ!{M+N4Ra++QX#U#tV6> 58Hˬ=dE1YRs&;FfO͆^ 5T}z8\OxXټS`^ S >3 G8 #-Jb8blfzDVbLVY ~ Qec4W"0H# 8ᤍ0(& 3 )D~HY#hQᝊtDZU(0]~(dEJHq;iG6{B0<떊4x:$8%j7i%#|Uƕ + vY9CJy/ ͻ PP@ ֏u#p1ew}246x0`٠xٱ5Ry/Z mo)H%a8/mF$lM{̡\ޒ_F#IE~@q9-o!k̾fXڹ4s`ф<ŻlqQd~/ϓ/rf_Ɓ_/L?nXeTZ oAVS6,ǰ)$³v<-|_IieÃ9'4; $.'r=yKF8Cy/B 
Hՙ)}Tޛ3|<}vz_QffFCz-r2lK>T~{2OO粦>oQru夯" ݗ_Ng7׋39&1uH $qdLpp $ .v|2`Ψ(trjeA΅Q3ٳ}JEr4'(U :W˟=ޓtF \UeṵYJ`:Y-h9GG3F\igUfD9 ei rBYZcHӰ_?5bAwY_Y_Ug "S, `xW2R^Kb KU.J3bK¸^ 6ijO;Te*;b9y;giB9k=KHD]L(QIͿH*="vo3b'XOU>]_hy+kV>DhTA{Er SaEӿo樂_ow$7K?/8YɤUB6R=ա'!Z|.ܯ&<ق[2}s ʹOd<;T:(J㭼?78 ""Ode/F\MGSW =pEɇs;@ր   a Ad|ZL%0쟈qԩҰ\ ̂LX`4(bW[cH0ltPj˔db(f&CP{STҌЁkQyI+TNv۬ɔښ$>.hɃ1JQf`Ҝp7N*)<ϻYϚr.FI=ZCJx_E:l{àWEg]@EnK7Ő-jǙT,1쯦_]s+iifJ˜ލYNj[pbnzZɄO|e+PIEMS-^֩FZHGTubBQUau[=Z{E8Q \89ϲ2!:-ΏNF?ME[3r~ͥ+∋{sk >_4m%qBo/KXe¸fkMb(;sq⩣zZ(ukI `A #*',: +˚WI V2U9:TKP!!hN1hy_hʾ[iڭ y"Fd%P=MR8߭-)v&mIq8߭}Dօp&NT,7|kWY bio׀^v"HA:i "⥣5nm]R0m]Vvm$M~sL;Y_pT$ZvQ?z>gkv:CaӚr֋Kof6g_GűzpkdX dC9[$}~7b4BZy}tswR0oӉEsJٵ~d\<hYP$Tf. % .e2%C=H'sѠcE,1Tf)fhÔ;!j*sȅo 4z2 .`Ƹ41"p7660IP-Ds3ai+ AEQ'k9f!0̵͐lPI{մ,8[M۪*-f~鑻)wۄ/|Zo=aM?w>"-W 67~EjvrpL`\mjW>(hbBMQd XgGW' ),߭#o* $x*B0:*jb$H4ڄwNJ;.Ճ(tTt/X͖& uHap!) u1#E %5yX:<&!eMN}':H @b5L;mMM Op*kbG;R^;w/N|_NU.Wuag0frZ e^3gJ5]hGҿ {ZѥY*jY*8k}dJ$_k^>CZ,.ί!~>wga}ߐ %#e;I;eAuZRcGV ; ,F%+ѺgҊ%`S>},/Ԋ))L"5`?ao=cigg T]9bƿ}6+i9/Y"',~ĖˍGP?уPtׄf%YvslRa-ɓoE<i<LŤDa aFja0Qf,cct( ao61Ԕ78ۧ5 <]u}d+3Kcn/b^>jEh'(_L}sś?:@bH*&CH?ğwFA:9xKn^vH g\!o\Nos'. ft_i2prGo!zD ڳyzTn/C)<:Kj8 N dτ1Rf9a qAP.Dc  EuiD$zv4Fa wCREFwcN+! 
$j$lSmhUvc)޸YƊ;SvĚ u _htIl2g(R ګ>K{j &@;g@*h΅jZQ7\x 3cӞ/^5RM:9 \g D-7%_Mx!p*A8 6 HxDr0f湲*thYMy>DN+ep;l^̐~^$2u],MeDI` 7P 4uwXUD" %T$B^bOJ1ԩu .3!q2akBحv;*XJ&|xbRʠM/V82晴{6h8!)K)ayA@%ׂT)E=nk1ڇ[I=U5QJ#\k \p m`NX@(JfFpVYܝ:]1'Ŷw?:[֗m>d#yn]乜b'PV>jfVMݒ~[9tGqKPõR]i'R,訕bƨ_Twl87 |UZ#h HNC ^nu^'`x-5,؁tn=kr9uݤ)oQSJsT* T3K {,JeFКf& f%(9HJ(啒YnT(*/*߭&hA5'zo稈pUlfKih6GIc*1Df# A9e?l$[9N5Buó&*bҔ~>&V#tTi!OLv^îR4'\!8ǍSJ }B׌?S-Dvp ՖΫ4 .6'GYVgGM9 &&Awm']3<.[`dvM[{#MB8Iao0*CJ 7}0"!%XZBJfVI߄[SE﮴#˧f2:oQ>'+m$9Ћ;#Ӹm, v`ņgKn]#Q}%f련AU_Y-뺇Cq7nCh{d䣬 r J \>VڌLwsoׯF*]hBFmT^ۈ{Sߓ r#2bMvr fOhݡh o^>Q j˃k4:M͘pÆ .$w.mdԮU?f݌`oz-JDMc/anJ[֭ hR\sպIB̛nmyP":mhbEHBkG}[ ] [;*,e?*,y–OYyi0=LCHcHVGN%n}dm2VG~[%^Kymgʿ"jOR\ζ[V ZJ.3u, ~/ha姛{كmb,Ve(pPH jA;cZ2&aє\8M!W-lÛєN w^O:Vu(H'Z.@H:*\ 2rPz7(6NW/N~eTHcXTP/tKXӠ[+͛;9= 1ǫYnT>U_^6dEH?cLVwY,倖BD-畖WBoVYUq% [==:%+k%y8-C`g_\|α-kЯBw胮bZepAI>ZPBE_(-ጋ0}ҁu5xo-Hݗ.tkGO'|D*鑞5ک);F2\";X"+OTr;Q4x(ₗ\Iŭ:B0!mF`cnvXw]k&k}mPI&c`RdGiS)-Sm\F%lQ @mkffS#5Hq(  PC!wz`s3>YIs+Lf7bV55WrO2r`asPÔK,*G@$9c֜)+ *Rqk Ŵ@ 8akUaSAlx# Y7XwdSe9P !h{Qd] g:oZj>\\ڸjE2gWUmvҦH-o=}zp7Ifb፩]?8<^UW8w4,j)s-)u묽\j}hO;?ykiNjmަR3z sry+'wjs*L)-/S-7ş/+e7>W   O*ο%m6 A+~p/nm@7'}=^Zo|4[kƧ垃I}_)-a}`T.?vkgAik/S߆hfNA*]ab'-k<'a/"B >R s'F=T)Ɍ1VgȔR2 eX*P8L(hFeQ/1g"J^0^`tJʍ9$KPHhusnVmx#z } Ѓy^~x_g] q.oB9RTV$Qʪhh69Ş%Ş=ȜfuVտFwq{B* >]ejK>Y}؆bZ)ad}- يٗ6k'`2n4\{0!S 0ԩD CPI>-=ULJ۫X+ +5$/>Q}4hKk;0=XA=V4}}gߜmHdiq$/aSK&jETa;_o]q}E,Sr3K)D0PmYc,`5_F(=D_g7,1UZVxeHUѪִڲ*D;ZɪE.FAlHxzxۇY|WGSmv0wde] k 6?]rw`=d|O0r/c HJ*\Ҿ)Hsǝtm;ҪU\Zݟ\g؀NP;VG&)= s|/ !gKi0Õj#1-CQ5cI1}И{LLW諧GG:cʹ;ܺrc~%їC_q/@Q9hxvs]& v1 sc1mSAETF׻RR;q\&VF~a-*Yf-f맨q'm7k;*ulO㋋8pc0HgGBT/ӳXGbZ͂0U*vW7F㼭'rGfʿD. ,H+}Or1Yܪ VmV%KlT Qcd>zH4?]H[8T1z]-jh6-pëQ5g ;`Tx&'oEmY㸁ZO#b.ּzBqjF[dt^,3Z=pPݿϒkm}D=11BɎǨ%g Q(=Pw!Hߊ0L7h=IEj&̰5 &rT E+\)Qh&B}d{͖k3.)U染?[y,\O;#l1'ʜ\Oy#Ӈ+ \^=n]ŭߨhBǾ]y%WSF9 mBMY ݚs,sdF '12%-pܢx멢D"C ){'9'we%F1ݨdph$.) 
*TaRK@^dɄvO/>V-8@8uk:ɒ+V%l$תM0utnǚ)#J`qkXq]Zr63gs00fF!z@6 i-BQӜ^e't*D& I# !yb4"N951F#̃ 9 tݎ5a!lsT"[b&Q.dv30:vwbSŴNL@ ~tVb\, ?&$J!q 8IiqtZL3H-wsz0&gDb,a6$r׿Tz K765kJMlcP#tT1͆ A5SB(N#晘0r+3 !SLC5fFGB>N淄+S@ -0Lj6b~!(BPjv[YR@23I`މD[{A4 E"( +0lD7 "72IQ4Q<v39՚m!ć&QdS40[eFƤ3  WcBt36LuF)m sA^$*3ʋi,ϼLE8UB49dy7ls0'HR8JFH'8NDT0( kX9'jf;o71n{ef5~:q ԘJQKlPc&(F)&ʹ\ S-8GHԍfj\g, VԘJ: 󗾷:h@F亯ĥftK{T?\D{5>$Exk 9b%U#[<?}%#-@$oՊQ(#"7cAJ( *ϻldp.ҹ{p;_RWۛJ__\ג͟XwM|lKr͜gV%{yx'ه')EoG=mpl.d&˟8D?{ :<^! dNf ˰]9?009ry.aj)|X5Np˂7ث6 CSW^Xa*+A3Mv Sk_Ϗ-`$P A;u0 $J[0B&GКD=C9o>P7%Fs]ea17hNpwDj\Lއ$Q}d(Zd8wL[c]KWOJB)w(B|{ӷ$BL5frHys''!& yPýO/_/| `];Up9‌6UF*_ ml^y$kcYDzVsRH<3=ߋ?a9Q<1R흣W*+qع:ݙGLQ~Ôkˋ5 dy+쉕NBB@ 47t0cg4\7CďAQF|1JF>Љsf:bͺ=P!*d^ {J!_e@̓fIZbC {;NrH4ȻԨ' 6#*ʵRܻ+ʝ$">m NIx4S5]e[O\$W2a9љ7]˾(/ܬ|;pmm`GlߍGi>C/:*Uo3'1R4Ee܎Gg9ʫo {xlp6ItboY=pF4 2Wf~oJߓ~0:r$,*,Zve`>&c ; #Ab]=xY<x#>"}{{sJ%S\A~·4}KS9"u模h0,:{eʖ j#{-pXmY6Fi$ura*U9Ksr枠K?xvGc43d^B?oeNXԏUWT)=i,ғxѻUMjIs_v%wإbV׻/ XB& meH-t nS=8uwB=:R=&H(eOPV!ߍngwXQv5k V5Pue_,_]Pw׈WH1Gu,?[8gNa8B1ob.a$LAȄZ3 %燵@guki4 X1rR rp,~ ȹU9\Pqg9(pƀk*ȗ+l 81o  DQ%f\9;V*ʑwe!}[/w 2؍#\>E}XRAƑe$zrfL zx_ZL@C0X.r4ǫ4Y{ز[^_ί/8M%%Aw6D!'>~|'&F4[NIc A?- >Rz9]_|7#]B%;80sTttjo~qզekY΀gVdqE a4 Ro}zw?YF0/R+Еg&Vt\IyЛ^{5_}2ԷY*sz`AN, ]Y0wRDX~:?X-ʁ/NWrYf|;?l JzBڙ#aqNT#@Dn(uR.^\=쯻u!B^eREqu} ISªt#62oUƅ) ,vEYWi^L!=6ʫ%!QFRX˝ l :_z 󦐗W)uv2ɇcj=ϓйnf\Q y3qyMr~KbƉ;IeYa??J3=sL+I; .& Xѧ L4d}zl܈YL>oZZ̝QNQxr$MX.iJ!B#FQYNc)sʹ'lDS E%t*a$)Ҭq0M1-6X3QӘ~vωv;>>V-vBgsm'6y ueX~>Z ?|%rT\hS{?S~\y(L2ʤ,HgW Ac<72C,s,3JX01Tei1~Լ/r^)EZѨ.GRB[Ќ,eR/SId'e< 8&P!MEOCRb:qck]W!H @J "^I?5IfxOQWj|jgƓ+V\y`((]_ZUj]fWf:ZwE]0I.4dY9.4`=흐=ĚE,A]NU&L,6)aݺsJ -kH̴Myl4!B,0W;Ý`nRArm(,KmqFjrR~^^6u"^JI 8ӕ;HD&&m'Z>[[le9ӣ`] BP c ^!UU^@1/K0aHIEAgӏ.6K~:maBl~{t/ɱ&#L5M|j?pb}Jn }V' Jv ʓ{8H/n 珟|K&c_J5QJQ 2+Sk%`(P$H"GLƸ@5 )0(w6R {V2-6ڠ*vu ZCC2[U V )Gc OPIBcGi,)Rnj,5Z -5]̹:c…X]#ޜg^J4`=&PD6F$()2:G1 4XaȳJ[ ځ:{KѼP1+xjߔT :DX#O|ـZ#RQaCmAjo#B!%qY#q9.1LW"k/yԲҴ:|5 K{$ɳt~>?WŦ ^0~^shZBB@t _  E9Tx%jW㰯D;}|$쮀 J+V-a6AX#3X„9I5E !bHfhAҜ"m%8kR#¿9rM*Rj}2d}sx)235Tg 
#r+(i#k4A%:>]!%!D>oY'S+Dܵ@ dPq މem2fsnm23u d jK%j\u6$}j<):MVג{/绌u&x$=Tϒ@Z>XZ υN3@C?aP*Y `F-}I.Xר5fSGŗ!B[\<Ӈ߻h{B<4Ӈ́wqt榷&s&B,u$ &LP&` $ɲd3*Wf{W3۟ ݓE,Cbt]𵑟 =P1J= z'@4II6Q$#Gy2ָ?hVL{(Vê#gz^aʢClت6s L!,'yĺ(>O?p~)=5ڰ.bAG~* 2zqmCB)# d_s.Y g!n I ,DL&,a>$n(<ø&< C|An0UE9\I$MTD@Ie(4y+^":cNO{N{c btXeG ;o- yۄlJl~f Ai |ӽnGfDL -+%\ ٝ+Jf!novQ%n=b_ ʷXяGEOwѻnv}. DHm\Rzdiϫ{ws@{6]ϿuF.GۥC =[J^]Vr$v0 kAUCFf'b 5/h<Ě\:Z۶rG+]"2T~FC*}g6:C~'*JjFDLjҟY[3yh\!/i|[wsV YѹXssbsO;§b+h$LYd|5nh 1;ϑ*HHZLXN hAJS>U-Zmۇe}Z,XvE]v_Ƿ@Db/ӫԭL cn '޷x~}Iշ_ήneL1Ϭႛ! L&9f0fahg`@XX w 1e=tcˈ9Vw kƑރkref֮M\;}`N52 L42GV!̽甑qlLG|:K^vjI*?lٞ<%­mndI#ю=hRM4T$d |87r 0D'4ӿb&r` HVkC53oR L=Z9 hh<&&%Nӿ-%A(°Ν:!pO<\pr|ˎc+O|(p~-$?/c5n!^_ssYsvʿz}3_͇sR_ҷ0i>ayw7fe[\.='inD DK1rw+ة<]3y ) ֤Vn9$!9 >BIh׉ J2H_艿JlDHl]@Œ8Č"2% Q#ځOϳ/#:pIz!b`WUuLBfPW轉fE~Hg$~Oy u. VYQv~nʘE#bLJ'jCp//cz PpW{Z*=z_l"@V *6kc%i% .aZ%FjeIsQ5+bKrJ5a>Pb[vØh"iW2-,܀H$ʹHȐe"ZaL@0kM@]Z` ACaZ3`9f a?\gIɋ)C,;ԆJ7E&U` MHp`CR&*Za4: {A= _ ; z~KӐӕS&L{|$'s(əY$QFsEw0AF^In fȐzd#ǔ'hi\ DOGI2 5Tb&>WY d1r-&lo("t%G22b%f?@̢`mf`2$"e$K)5 o [{6dw F;Iƥ eHwHɀ^n,$QYkBTK>N&uQ&EA$% Nmxe f.Xm? :N@~3Rt:m&瀚 ܚ||s@Ki#78HMH\oH D5cXͣ 9\{@݁iPPZ䠂՝ w![8tz928YĚ]^<[l2- w#L{W=t찎eXPu[6ֹ\D:4o'$~{_ cwY˶e%Kkθp.;Xy=&lIA&;LNNlm).}GD/rF+18kTo H_F|19brM!A(C` Ba}V r.@i<'8JWc5i4M&jTu*` (Mab4;(] #,ZelX94 J(Z!^5Ek`L%)5 iÁIx//rnԠdFFY؆rN5֝fIǨYB; O,1*)ymM 31qMU1y#+s~ $C?E}!֢{Z]=2#]Y \>xg}cܯ݋g||ϖ^=@s+8$} ӗ=7gj h:C>Vm€lP Q\:(MlCG2\]V Fy&i`;@kl(R毌$[E;X bҰkT]}U ꘄg-lK95 y'_DpmBs<zuRcŠ$H}Uv8z0yߢ5{J蕃v@+B36w'P6oѕ=UҒ)) i=e篥%Q3Jg\Xb*JA4>2AQ &xcs1^yGl!o:Yv+?N^\e[/W5Ҽ={/:-'/qws3{{񾽘=I8EhTDed1L,5D1 D} HdJU$FC/NMs*(]:ұz5jK͌՝VXx6),-T瓗9 f~!.A?@V¾9eiiNcj܇$^м|:^ULZ[0zGT.W$=Rvy*Qܑ eD:C%6!ݲUx %NZ?դ2BV&d9\t\F5L%[x%Rã52o"5a. 
>5]{)}F O=?S&q{1۟.Z;-}neP8MD@;Bb^+,HAkH([D@ !$,1Nˡp^ <=N<__jx+=`ԡVMF A# ±f=`X4,S{Z'wNȎzO5 8 ͓˳!pr9 r&Wyq n=d,XIH?Gփ?8mB<"mOD(HrHD(pI:=uac3np562ӜF%}5 4r7\ YMI\^ZTSJ3JQ:a/Kc0kKfek}0zx|{5-NL,"8slY)VPq,)g6轣z_U5gB; /;s#疟#G&KV OS!.$vw>d~lwO`M}S&ٔXamC&oෟz gQ\o?t+[ݙ4{i{촽mϟ<6g|k ϟN1tBgFwYWױ̄WAI'GbI6 Es|=NkLڛ  "`<~bn,m%+;MzGc6Mxƍ~OaOF{m4ĝ mWp BE5AP%J] }"ضAp,z73&iRZl#LAfXfZ}U(tiWN"ODž Ƴ힞c h$+xjkTEޜ}"(L24yϞ\-j2UYVO_M8[XVjSwX >:Msr1r?9[&`,MǧMW`߿&Z~0Ojë>OoUާZ:,v6{w~o?)+ io`y{_Mw_6t=_lfڏ S%(E (y0IJOwștGJ`T@(" OT#{xLjj"FC[>P Ss:o8^_Ǔ ˲tpc̘'SGމr5.jP-g8j:]{1-fH@R"vV^ E-#$*);R@@!"2l0R V) %=cWi?\/ѲP4sdpS5<PҤ#mey3] У˕˪X^m%:9_>3CLpG6[lZyq.\uf-Ԙ<~tn~tw`T1JE/A(8|zNS,M1e#i9Hـ 5f~^X"( 26R)7=f *z󵲗ww7dU>*C`#0-f04I'̪F~^ Ywݢ JQ Q3 >-d_)kRՕ0ĖB &_SJ&op -kuM;FAGAb&6hahaͩ8aPؔ׋xMŜ,"TH-9ŠK3ƫD6M`J<Ṕf0S!HAO^ nF&G0AoЛ5@ ww+wX(,;XBBir Nq;W~ .Me}$hb&G y5=Vki$OOC\8ͶMS]%t $IP,4Tg,^2ƖL35kL KLkhFYLiGb4<11!s%OKj>5.\tt:JB nኄ (e$K^xݮ-Ӧm5g4(6(3n!%~rv' N\)IStbA/Ch\jJsFd%3$4Ewg/c C4GGB yZ2@5V4p!yM ۇ*J#Oa"FvkLhKkj 8:Ic7XѲ.ilP?mn/ X4>`ӌX؛< XdYAp$`mP+'Ȟ1؀1oв0o ͥ |jRiO莦 ʴ7MEZsmlHac% ('wҦF ǖk |()Q|{#\4$I*#>wtBZWr"M wI!xA6(FfW&G-gM3 SoZ'8x j'*Pfצ~cMpkp+07D(FqLBxMP ;"9wVIVh)!{+ c^UJ<5I\|°NLgL"F$d"# pV^xU5]ɍ53~J͏nO ܌m#T p9H5S,q-]$kH-r@ R=d4^ J|fL.b6sB2cF,M`3 M bE%JMA:1qT16va Jᭁ՝E~n|f q,)cr rtTt̫WӍI%On`L@ \~cE< Y̾fQFxgus6;W[,f GK18F1|:Qs{= FK^r9 7+,|92tXb؆qLpogG< jY3YIX͖ROtl?׃֔/Wz/N@\Z%)ߌ7I U0EmPY(*Q9$V:+ d%j^1SUY^J:9uQ*pVRt,$, bOJj+diynԛ\Zkkhni$ɛ.FՆZ b9!o 405Ww|ʍVxOdXLW6r4ǧ#^޿rBѓXZq%&uG]Qq: !u-b eOgE1ex.K{jZKwXmkr{Ź"> EeqN0ΔX<^⁅Z[^7S,eU" /էTu6ԫ;m90絮9j0ݺ$Adz>u-^t:a h\: < Ds z ]2qq`c 9G_ ((9!S"Qw`B8nyH jQ# r~U9C)+'|ON<OVQ܂928:Ok tP:r,i1[ S">j*OUD6^^,auhA;KJ2"}V1`Z qR9=T1W{ns[=x )C@9͛k<L>p$zfGlF3DZJZkڡ yZ5e_9 etٹT -_j~#L: Hjqh<@-!}WFXv+ڏ `^愥"r6--w}*0%8Z -]RjOœTJ] hc)jK}%+^<5BFR*[GtdٜK^:yM챝=:){lg>{B@X4hN9{BK07HE0a(+P0H+V벩&? 
Im4 μɠ?M'a-ЏkW#r "WGO9KU?]?CChEadHs!ƈ&3e4sggZ,^\2:pӭ?`3z ̐mda0alCfAߒ l)0b^@xӯXFwj `mR.f G3Kll靄E7?G~/pyn]}g==^̿7rCZIh?1f8~'0yͽ5pplrxcP`Ꞇ˖g0vY^\䯿dؚpAƭW7.|1Tsj]Fn_FVsƴyU~9DB sS|IgNwdwJͻ*/M;:g_*dq k/WlCY9| `z1 1b9(p'qM1=>OS@/Y݇\.uZ6wvc>=jt77~,TOV,kS2y s݌pFFTQi!2%9)DCJ˩\r-QhDs+9+9+x4g$H'!971VE`wAv V"bzd`E3z,'*X_!_+$B+$B+$xH\V@G J*3 LxF #ŋKO/EP1愺얛Kia;kwQY<YN7I:TgW@`VQzZH%^J­qΆy R̃ @PhU`kJH"3>6AH((\"s%ʃtH-"7VH`4AFNMphgW|{yy]]}<>[׭dH¥ % @p@ !Pɣs+X@%,QqTzQ(K)~< _%X\"=DDLU} S\}~k֪GQg䖌!GImE ks =\kRPjFi=]igJ4؊:OvQ0ͤET; ,]J̬j]ęÁeʨZJ=aJ"f{ks p?; |OO>}''''+x<x=g g g g89$CŸ%iUQN ʯŭxP~u>+'"oD)So˯wHE,Ku[,_Y$[Ud(_YH .O0&1Y|xḇPj~ڳN %Äd3%,MYM&pet*ZFs-5\X\b820.0H)0ZSICøaRCRCŽaX%r!ܲLbEjBW\SYi{t/Jα$uPz.rm.b? a #m2BH{&T(?kj퍒HYVC e-wQS*]YPE[uGi%bE}TjQXgc 23Ā|1 2E>pS8E``%bK"JI ܁pfᖪŵ(_"ti Q2L̐H8@3d1Rc$G1hAdD3R&*K1uȷCl -$[VMQK]8D./QH4Z\Ӳك̦P fC#AJt$TZ<)Jq#~xAoQ*^)|kaJ/ʗK& f_3 YY$0h H+s @8%#CǠÀЪZ?r[Ju(~Kz)L3{-V`lrzah:%ib "I)⑀CuOpis>1B {p@8 RfK H!$@ Kr6TqdРSh)uLsldTp6*rIdY$!j4zhD&=IeH(|Ge3^_B煥CaBKV ȆSz/קyNdSIpJ`Za1-HPBRD; Kp 2B% ^(Ahg@UyPJ,)2K[ xpPੰLS]o)@#U">BUBQ zf~8qޯʦ8󗡺wT֓)/"dD(CP]1Q$d& K\ٻ9n,a&,tO' 4iW\W5]vzJ[UTcb"u{M4Keg jz^pkƁL}MS6?W@Dȕhd^#%UT[=znߓ7Pu=&Z?j;|s0tx:>zW/wp1ف9ͽk]kr&~Kmym7 Lݖ$!wNm]+pkܷ].PIs֓==}jGdk}d?OOoY0hk}$ǏD[L4WY-$4}^sr/UVͻ9Y?[St^u,Z. 
R?L/w)lM*x}rEqqa݇vBB@>$Ft^8%DE/DH_n`]+;ѥW)@FH˱8ZU¹?|xAsəϩi'_oCm.G)yK?Q oid{{8@/Q#\ꔟNڑ21~6u.V)}؇:eZ;ߴYK|dIR|fu-.` %)d:ZQ9W[HZNɝ݉v֥A^znm4,J=)RI?=b)#䲊m%͍s#B `WO*J}WR@5U$ǚPE/1)@z}\zC}Ͱ~q/O}{ao_X3-pZXbX !8@0ظ}H i #73Y!/9JKl9%{NnKFL2ۿ>Tcq 1Er UZq,J$gxF1qΟ95׀ƺ-ϳLpAW ӐB&5P0\UV]TPS$CFRywC~_㥲Ag!9ûz&$ȫ\1l|KsWѥa` +WQpUF@sdRsIcUI \ylS'~b_8h¡JCXBƜ9YBWSFP&AG[ AZ=y'E1ʞ!TɝFF],c uP^S 6B%A?a4fa&p\ 9F3q_踵(DnLatT*k% V1a,2+wB=C1GݐWoz=;ݦ( ]ޔʦ\᧯l6(߿ŷ~& _c/ CxyԿZl^~~Nem_\nn?wQl.R\{`17__灑jQ`F#yD;}qbk} 6;G ~ Ȓ@᭟osCASUgA:M;JK] ilƊmt6+ cӄX]򡆪>;nid=/PwW"_ {5mrahHHDvmV0yrr~zH 硻~>-?#Ͼ#ÏH0>uuMՍ l/SLQi|bDh'@} u 7&ӗQ4M F /nMeHh~w#.&Y 9J3LvvTeh*; k},n3zseih@HluP&=i&TZ&]I˒Ip8Amh}iީnK< q4j3F*R|FY4 E,F%+"ecA}Pѡ[t\Ǵ-3ޔL7 ߖirє-#JL6JL[ Q`NR`ۖݫwb K2<>(74Gk&}hc.9"hBP]z zu%2<a4/{?.S8EqHྂc+N[ːcc- btҰg$ݍ{NRCB@shxCˠ=pJ}YjtIw(5PCJA-.{@K&#c ʝ|(:R@2,vQu/|dH Qb ,ѱ]TTj0uv6AN=Ą@DS,SqA_x\+e䬸<l,Ee'cK cZj+%OnDӐ:I4 $`L\fքR/,;Pb΀yXUٓnr4U%|Z7 ,ͺV }t"LjnNKMYrvڝ|htj8q9B=)X bFW$Þ\ \ -Og6W;Iæ.Go-;dWL3pI1$<%iH R%cpP>'CvNYHpF-\q^S,:qn\ޫwh)QRwȼ.hz%٩T0? 
i O|Wh9e3-ZTXHJ|H'H&&RS9JRujԪ9J/6;PpX %G5)#K+")(&i@F &]9HPBߡuҼ!䡵D=LtLJ =z6 B\?=Cڙ P(3w~v77w{iof?Ԅ[߼y=և/WC~S~c_{WA_}eն+ XΕ&c7 ,gI4 0O&7wvo>׫/Mg^-N6tyu륛Ư r}Ep?/ ܾd 0)}љv*s 5Ls aiBb|zGkf=qq1 x>Ιt@#Q:OA_Y RWNAq2qtySCiL\)-Zs_4+G8pZǵ)YAϞ1 !YF8O;;T.YȔ"1[!ΙOh.)"8ܔƛd;xdh!N@0#ĭql6Ys\6* vpzNA# B(zi'F쎢p5u!ĐZSu8\4a9?np7\i:@?Kn8pbmϑJziEEB/&Mn4aJ蝬)JrRH"}5(Y&1%A 6`DbH.BytOysgȅۭI_bCFM7D.G}|+`Q0`\"B jf܎.8!iWB/HTj(⡹ YnY^ݒ 1qe5W0x4gZl!"b0(#%H{U}Hap:^ 9鯫̓T˥p޽;PA0*گ|JBA!P Cb#FxnڢCC8$RdtBM~i׏K欛qgUYVJS ylFjԢ14sQȘ$Pj P\Fh7n#E/wu~R;VeRKv]$fd[#L?@M"G̤f"Oݲ.or l<Ws|1p5:Bv]YMWɿP(wy^{UXW[.7ȭ\@M1wnq 2!\+K 8ùt,ܸj}lJ ?iɫX;j q6ΘE޶!Zb,7gG OH[5BKok*7jvY7ƫ!,Ҥv 3R6lB_jsikЍљjWA7wX?/80Ѕ9KiK5#XĨ^y5Pz)Ñps{u* L-UklZ4Q~zR|0X`N ;K*;~ b50?77.ѝ̛f߄ʹO>bo3\eg0bP\o㛢]#,Π.P2O#3ɦBF럸yU̻W 5w 7_/kX)^rx>]O(lggdY>;cN#6#HU؀[04`V+do}g≙qlpYBI öoq0*ײ ӎ4*c!Zk ~WmuJ_Qad ̖FH)0xjl0xǀ19o wPyp}p fun3.Νs>ZGU'\r[6}.L߭3=/+֒L x@|% ٗFŕf }Qo?;w ZRrsHʶC'_^twE7{/Xrphklv̬8l4E 6rGr|1vxHy*FQ~L0}|3~ ƚ8fvD1i]Hc1Za>β H)h},u d4V?ľRA(8Ou өiJRnr 4ɬPm)@1iS1IK\< H%*B>vH|<Z}9^y=%bfoܾGTGq7sl2p$ A(JGpD5VȑI7J3?-q ߍ7ADzZt5p7VɁs)S%0aWZSؗGUCF!!mԽXD! ".,v$W ('!̵ elUIv|O)_KG}Ɇ ^2\(:@"$gPHyCU1LRh! 
o~SyΧݵW&XB]&aW%4)Or7uY#/ bZzVݙIg.{O*$I!h7=I* DSoE??ί+gerP( @05ZP.w<go+[dj5yJjRUpRbX{roEbhx!?O-JS}_#5vV|f26^#R$bG %]J-==O$W ܹC~&O؋~TûD:z+Նpڎ VwZF| ֢<{ g$Vy}fdRKaP}|o1Ye?Ne {dŴǿ7tqwHx1 ᄦ)Vomnj)MeLn_-7!_f8Us܂6Kx8`-T=D2N:\GTV'\Z&z A[@Z M%mq!ϐH>8>%U= |c$OA}mah&2ԋU:PYeJћ(3K?p|WϏfY>NbO,4-YA*,em5aسYpϞ?f$?:9Guq-9HL ,1DFBE0X ̅!'00>ϦK~] K?s rx3Ao6Ɲ#E*cqdpd'ԛKD4]tXHA*Rhz)UkоV~Ym--LѤw/MRfQĂv$n%7# HuXnRHL_4'RW ny=c1(j1D+D¥QC7_,CY;m 95{ͬU51 ;S09`${ZK:&n}ʘ7+G uS{BEi6xG{C<:X~bѮ{csE2'{0❪>; OYOÿw Diemqr9 .i~0!O?`]?r֍1Y:bSqiaO"u*-uA3C-b)Lq -Bp's.Bgf* *`Pv>J!ىZS%3VyĞQ~@HywzLةuCrI3+o/5a%x}Ju6srK`-٦s@HQ$۫⟿߿ԑ/ i0U=5Ud%d5o@Re[EXJHl]Z*v}_z>: *Yoɝ5@g cj/%eWO?È\("2Ʌۦ}nv6go݈T A-5R$1 LXF5f>ŽWQ"a'u~QQ<47bt!tx~s=x魉?ӯ4J)űFv)RR6&uֶ@MU1pW%dS oNgv9$t.]faR` ٽտxNTAF^0َ7Nm834Ca7*Bž lJe]*6p|QƸ\ag a2Kef1r2ǡ6,w:ăqz si R$!wi'X ARA( QceH͚ٙJ^v"-ݐ~Iɇ+$RnzId3RTJ"ٿcza -QhgEDԄryO~,mBKZcUD98#T2 vfbiKý-) B$E3c_ȡ}H~$&Ʈ /v'}{Z0s"&& Nh&Sr?k((cpJ9esq//yB-jMŜE^gU8JHYvmn^RSBRw4qTRqOJuAo)pfb,S &ZE< v3 U0N*Sv[Ol:QpBvBZʣϛKR3߀^x%J=LTQm 8 #BjT,cm4ZCm8sPOEf GGg44vTUKSWȜhԿ +i!GI7)ԙuT$ iCX1dxE|%버`dWZQo}|})?ݩi]#: s#}v[N]Ew>-nohut$%S@‡;7p~K7YװPy 6Gme V3Bv-;4ATH.\h%t. »6"5s0,AƈAR"`M(LvK%H)^RF;)'$?.FϢ-ngLQW?{~_$W?8 6tf"3:!`y]'\~x2- xX\l2}C1{(.?/򛢘]#,Π.PfLг'.w^|`={XxV;ߛu@.‹`P?-޹n2gstdY>k1'K,TarnpC bV+Aq_>Iؿ3̸ j6\3f|S5(K덀b>ы-YLUdC\Gy"!r(dّɸs[Bf!귫i祌/Sc"˜EQJ̳(Q"Ow_v>/. {{qwo82nIMw~z~0/nFx1Ϙ.*dg+<[z5_:+mI`<v߇ 6; ra$ii#K(_5)ɔ,ٔyHū}Uͪp 倀)Pxu.|s7a`ZFƲFOY.\]}=  $/oE<[l g3ϖX<;:[g\\Xx|ՃVQu 4w0F / yE[<%$Ka㭩cpB0ybQPG/Cy䆅u,2+GTt7A_q*-sާҠ^}dH*wS/lYao;S'2, ~]UhBdݲ3@ &_mAօuTx҉RU"=Az&c8^3.O=c6 evb;G]+֊L~RO]ă8&$?Rq-^%PeOi}]k铓B/L]tfVun!&6YW NէۏO7SND#i}+L`8 !bHRj|}N8V_E)}z~/쨮5;a%AD黃6C6kd6ߪrr!$ӕZ(}7ftB˝ɖ&8( cDKE\&ʎ~p7n#$V벼ՂT?[N~`vF^++$beWf@o"DUEvv  Ɉ 8ubҲj! or\H\d;fg/6P8|*V%ě:ʌy \9{trnC. 
Ա/pTK'NszY_p̃' UC*ō* } nd*}oxᐽ<ٱ3k 4(vqa^wMTzRS1Lsk&UMHN Ғ6>Sv#Yefݺ$C5R7_ VY!Yf[mCs3|)R?u[s3Kn B`\'7BX @'1Fo/ā,; U~lUcg[3#S?Kw1×mVϯ6Q8aB44'GRH Cırcц ʲuPc};d N]U2L/娂,#)wwy ĝxMdʣ|Wx#H[1nC}/ŅgXwX鹑h3 DV()3*ܞz%*&QZ ^?2#([/6iR17xpBՙ NiĐ2z"4 J !"kHKV#V(PjS2 IR\g2}|7}\c j"xϘ B%3yBlh#B`1$Oy8)hu -/t5u1R2>a8{ YE}%Ti+4loZ\gg6Zu'Hn/s.G.JԱ?]E+(&ϻNnvӌju '9ewrf6+=NQ5ruo3\_ 5}7~ ߕ„])y,Jo,%z˻=I1hا̲6Pm݈brYNY$X !Ҋ$\ Ik?ďt%}I *jW5} K$8ڭMEwcUql/#К2%|Qr,8tE"-iOz7MOW"۳eGgGa~ FV80`W0S_c}GiώMI'nԅ߇{ AO(x};}/NF#Ǜ=ZI7}ܽ~Lz&L&A0|~y1yw8]\)XMw?]eky^jE~hG7kԎ]'߽?ĶQD#ՑyGWp^ :~I Bd:BctG!U#?wPPJ?EGoiͯKq"\ͩp;IoD޼ڞ`/(MÐo9ܾɵǓM|9i?fҨ̉gٯۤ;Ф㛷[h77! 4{E^:.g7tK?wC&;/UOPo{?'AzI]pqzh0/8!!4znܽRXȳl]~]/sjAQMegv(\<~ ' <䬻xm7Ᏸ}n?pTxpOǷ\M5zGMDu/"Kڿ,y1%{t܀r޺oڅ5)vN8}Iyplr~/=Ǡa0K8N2\ϫz~/E6-wPv@1LJ}:2K<"(1!s|ж?K*utPr0Z1h&N8O3D0Tώ<-,۳Y iLnQ&/B KHAc16M 8X`?;;ٙX eñk-X!Y`*prpEHt>94@e3ObN(1TF$D,ё4P>BD@ET\E&#kVTэus2Bsr8g1Ļ|:*S?6RwV2`48<Xkz_Z |຿:8qhÔYb_OZoo+>iOxӳk踕>{$y"<)4]NmM˵4w $MihMJYsA\54b T+|}@+حX%K4jRqVv Zb grIwfޅ"#Cs4~sɍ9 %raޅ1xpK+8hEם(µ@Z̗̈>Eadyw)X;8oqi/D[T wZ3hR6Kv01~",C 36Y>|Bj?{Wƍ sCoUqRbgeR n}HQjI&~,.8R}^BcJy 'a L(k"inNs2K`soT7 gJ޹ H(44/`+zqYzG܍ H$VN@XK%/0+S .38\K$^`%o܁6`/oAU6`[/6np}R/ҩiC:c9̐ ,먎bQ:uTG:Z0{3  P)q%3 pA`T2- 7@Y?w͗.  
UZf6lsv^$j S9*Vk^*`k,DWrRrѓs%s|uɾŧ;p5n!*l"8Uڙьu8)ڛ>S cmI$jIJWo~xKRj矇߿mgh2G_jg×vfe-wxYFe RL[i!G)5J?`(:9 >-k^/^bܵA6x:*܋馬T̙@ T 4rQv"B 0(2Qߐa5o#tsb2OzvNQ!xsX;'6{^"ǏIbC'GIs?&(&ҢHb"-&Үۅ%r;,)NιkkL@b#es>}.8QLD1NTML=71tI8nQ8Wbu,Y*%ô9H[{[#Uj8)Gk͋S,h c< 3r)uI1 f%#2DBuI+d3k]D&Q (F%PJjT"q~šzo ?(FQ5$,2Y,rT*Ԣ_jڎuI4ѵ]ڂ ϼ%(W`RF Dj(S%1@cSJ4sQQx } C[ZP0s )n誠Z "֒ϹǞhu.wB$Űޭ208C5E#::̩,<)Th䨙3Xf֙&Ŀ,K>IKî4/08IcZ)˂qT& 1݉P(%K^PK7 XMPSJ@U4d2 L.3PBL>e0YָIUZdɺa7ȋy-ȇb;sI36ߺ?Hw[n?o7ڹ&]ۤ/_?c߽{u<śg~#%Yq/N/Ͼ¨z⒄Odz=~YOoc@g@]̖/god!ysi6:;A;/j8To>eAd; D,蛰M-c [ji|<3#g\ srƻWaزuj0c }P;e$ EZCox7}!׆?>Ƴ'" {-ńx$yч?gxeçujgI_Y.e"L [yH“s6f W!j2Pܑs4}C7c}e)$a]Y0'>pNۋ `R' Bzd4os [ʊQENSnKp 0LhS3*њ1NuE)e7u'"RuhL2  ڢd;h'{]9+}K!5nҠUd!3tIvz/|"hkűn*^"^s'4`s/3V存k'\`\w|/'g阝܅ŀls* fYgB1-ۑ-IÕ$|kbJP6xLjFZ38Qi-RJ%h8zKOM _=Zt`O_EJ|QNiXQ1zd*%ӣe<5m{8m;eh;7>dv&ǡj-m$kKuOJCE6ߪ#ݍ=&5P-f?P~7coo" 3] Re*aӎ pmBaH{&1YOĒӀr;oLk.JcoNO)ޯ.: W"-Wd ԡRXLd Tq3!}j\vZs 0{w `JUG)+oI:NV!i2<~}-d4[DTݭ8Jڥ ()6 { Mkn&9rZ j.+84 "2" )^#pܔ(6Aa"0#fᵓɘ Ttf9uoLw%=mt)ZNM΢WW?Ǯy3]j9DnskD^'骄#%q+'xd&?(TI,Z3ƽ:RQ%RNW%ڐdZmo85$h|h*C\=]IȮxSP흽٫aXpaN:D 1֌wOKYs$]E`ٴɃOaRLڔc>}~)1(6]Z:u#i;95r4uF׈6vF' ]Z)Z֨:Uy8hɖj2iIL:0q7 O(s悱N czsS)W `>~p9 o:|.F2(R4xip[ѩC8Epk:sݢTqGU;}T%(V> U >^ȺS:̨#ĵQۻ;Z֔ǟ^ ށ$ &i {;ޛwolqH.j+"ĭ{_BK;ڣ?Td7.WB7Gzn^J4&}^ #'Vba(mXk W^;FSW8l]7H$1K$4IYs8-pa|jNYR&r~^,j"X?,{0pPZX1K l2- plb*:}ӗ vo ~~bpopy %|.]|O_ ^M&gݭ o+[;Pƫ.im;iۺIZrFu>, M>l;VB.ޑ->=厨}{k)#gۇ\;ND%2UKdzë_ l?w;o$TPxK$ }yᐕ9(qFI&RZ{6|鼀O2H;5vAw^ڙwtBzp%f)QiwuB( 1E0C~5o,GW| ' 0t]³a=^nN3'91N:Mll;w`qQ\Ӄ_s ]_W/t4ͧgލO T\A9LgR`/&Zv:t<<beBu\k:]2SxV؊EQJvqHؙ%wW>¤9N5ecf~ZǍSSo/Jx*/Kd V\dIBRl+hK QTX;7M)-p~p0sAq1;RR@` 80!Z4 ,H9_P[ AwNzJ 8pǥ(poIt,Ggө#8 ,a@ "KKLp*IQRLj/kkTQ`Pb\1Z.1u8ۜS>( 2c;;,jJ__q~x0[5VLǾ3%0| mpxđ\ OO [ǼO A%\K(lS܁ _q>oy c%h܁o1 ]àm1QQl43vQ<;AQ=:A)G4w#c#HY:ALnͺBD"eLjS$7hG;\;[33}Z7?XCE<8็X.a42 w14{p>cRs$@B '4$\ENꕘ+Fz.VMRaCBԃy#D\]^&c6z/bb#<^VNNg.JgtM$Ǧ%YTј7cۜjͺJnmdM﯅BF2UýWs5!e|Ms:9[ ;Ј5^hrh _ LZ ߬ ޯ, 9L5XFtV`U&d^D|Pg њ1 \o9NK푠(޿AL6谁mpP+BS"j?$@;XIead[;#ݍڴø9@I%0Uˤ=А"GfCT.g(rfYS9l>$ Z{S޸KWjuwhc,pk%OWTDm7< L[[`& ^ 
XJzk{q )yթJѩ E:*轮ua.KbLtY{ޕ$"Cvgִ>C6qĈ33Xl}"+c俿$ur" &fwS}T=]^Ax*}OۻՉwN壌uBs!";ʴCjxЄw`rBwȧ*G_zQhpD֦_sh1%"h{DP vS"_[uɭe穙DW*u[i6Dg2ZAEaЬd0^T6y*[IU%5UcjF ٭xMNJLs>JiN\ĝ]Xɧ}hCCl[L/| J =k)qyU}MZA mI\/vTuK A3{j=jxϊCg[iđ`+NIN6?vj%fյ#×P%mK`%ѵ \,((E}ثuf;ԿLXsi?jz^!%وG{)&q{1tWVjeYTJe*5+i%HMgsFk9xBNS*9T@V"Ro%H^GϞ[ DԖQ]|d;QBVJ5f`vu7[.Tfd[u#@dmsxibx.jUjO`7\כ:o7IU_1D׿nDJkWauj ma횾0VmIl۳ks0Wާv~ʴC vj'4Ud(;&F[RtQNפ^ddQǀdEë/{nLYmʣ1q+rM2M>^<)4±'/A OJKj昍 ZY mSf107y%3JJ~!F*"FFD!sƉ1ȑ t5(WE!D Bbf^ic0{X9QFb'*~9. ~V1cQ0+*¾m@(|EB\bikˊ fnȷ>ۊe쬁)EIİt,dLV EUZ1b0 mb0e,D,I.QdO6ři3LBJM}gÏnQ]kqB}]µ.y`HB2xa}Ɨ?uizŐ0Dv7)^tDDذ~REEa4 /mvIsd@iƁqB uڳqϺ-<39ƍV|E rì_7imN:Ors2>7l? 2pAW27Oy\pQ¿O^߽z7ƭothrH݄yMxhYzDž\ 35̃ߠm |"d)!E=X4C#.!B:6 c ]YޗIQ6ðڏs&A+rw_jC\w5Q= i=OZϛāEίF^?o5\o#6A\v{ JN*3UY7V~`S,s&鋫,x}nCQWק^I7 }r|xr?89/'/Jq¤ƭ_Ez5G5?& N9}5-(d?K!p+8M-@bꤛ r(Otno*AEll}oܔj?Er><4Ym6yׅ.3:V- gƳW/^lh̙gWWTK荍=?>8|s ^ð<GwAw·nw=+F7zyf`&FVm\QlJum>pWMQՐ."zv?}Li߃'S^،=[in]ݨB)Vˀ{=wrc肉W\_ii{?xFP-K9L%.寖qN0Mls4qdp))No^ w38߳p=ފl!#pL>.mRL|$si 6&"pQ,(Xm&YL4d,qLs)tЛ$ X'R9je1I+F+@ ?]vŎ[Y%_2-;=aVf֢0?Ԇ{#7hCٷt`u2.!]!/ ]nOȲ/"$vTY PJr2fMJ R@ID԰2n9YkgN>w=K럗ˡ&s6ivg{|)I6dxp`뤏ffo{ һg_O~{h.QKAt74)O(s3*3r[ˈ)VŌy+hdSENXm,UY L\]7|׆" .n^F- ٯe_^JwoЭ}U3^|ڛEn ќC >Ixrb0:9zŦ6epI֌YQLP0DM9_Fn!⛗[_|1E>|7w OLIRPb}i\_`N|XϵWc{0Ul䘯Ÿ6P-w2r7!4 f?L ep0pY8 ,w6}^Z(QxBЖQKЀ)0303i x(VQbnr3S Q,r-e jN`CհI0cY4TyM҅?x*a|z*gA~`S{˼l j-[REߟh˃/*r^!f7j\X />}>Z?4L Yc!/ BtMk9.sdgEњa蟵(@pgU^ƍQ#1Btmw=O mCwoqnag;MĿW{LbxwD<+TOQfQkjFx+ ,8Q^mlFM' Za jBm)dX *9Us|(&>q`aԝ!6MA]cgb\dIŞ|KI"n oS5okH :S~KzpSUZx; NigF C(0 Q`iJڲ>EdH5x+7 'BRHJ#f46:u`@e& eZq$$&1X'4N$I1_ Nh|upE,z'{01idX#ŕDQ% s (#|I &p+0*Q1ӱˡG){KK ukĪrkADGV%0'%_po*g!;On^_ۢnxRH Ox dCU2 N1H(e`V^Z(Auq ~R(QbE ʼnF0K8|O\̱.!/ NQ PnY2yT녃 ž?wYR2l(lsQ57,-Z|Uvz~!AZ Q=ʂJMT ³.CADDDb\Wpk|| #2ޝzrb ka]4aX'αr9։G&ނWHITmIuNv<1Kƙ 0vK#v8u۲dMzY8;~v瀂^רԽzL} t6q9{Fra9yCDC~}j3fnPwZkg;=j>{z=V[{h#װz?,ݓA5uMb_֣46o?npPO$F:AG5{e# rF?c)FxWx+5HeD#\tF?{o'z)!2!c6Z"(Zt򑉊K@V?c2Knfl-,>25[lb$͵uH;̀|$h]mRXDY$I1'I`qExrwdhLEhDΕ`D g%Ej8}5Iҙ 
H+u1hw01+0^---u?>?t܇`Zfpd;hS@h|~r)yr|C,,gm" 0(AnmJ^|zsՇ_|_%)\fkA٧o}ˋKN_O0b4-,gc)o uX'g%>wvkǀJJ,p;gϽ qBDH& Z5TS:1bC : N)L舋`BATx 4cEJ$-3$ E~*bZ%:*8`IC,VxDIEӞFv@zeD01' D_ da[{d*tlB4I {Y}bj(X5<rEDYBQ;@~9&c,DmL݈[jtPfhƷEuBu\IVNcK,b S|>X Mƻ1~{1#>"L>|VdؙW=΃-|ܐ8] )qLˍB] 8zUJ! U.VIT\RYa2zF{03>'RS,?c0oܿ[cw_!p0 * z&e%*\SAk K`APά' Yڀ{KXgzezx>հ7AR,a߹@bqyvÏǭ< ,2z)ZU%Z;dW.jxx=dAT8x;-j]!1n2Np+TIC7fOuJ_ݟ?3\4.|L[ !BZʮ32{T~mʭ1[WbE{ixْ~+wv68MF3񯐔s}]8cnvܾEuF :=AzBU(nH!lA цjgO1&d \qi!G/G/K(zET8WsEtkl!E]AHQGdbK(۶qxrNb{q<^6~~G=`A {#+oH η; #4@֠̑ƞH\ۤ;QVVzo}w-A{t*|y Q wt-iZUS-H`xXo@/G, UzR0v%a{+AOv -̊a6m|ef=F3wQ1* QVs^E3ENEW:$0\p:o(|lHif3rʖtm{Mt6ǖy`ً{ۻm[{c`iE Q_oVQkexZ2"?Q p_3 ,HAo9ؙGw䄙FfDzwֱbѼulVWC=JVԑ)ʠ= W[TlE#ө tj?:v\=`+Fodboߢuŕ-rlK1K+v+s)8TtgQ:4o]o6 n{lswāƐV⯑ˋ=30ޭ·Ka٪]9G,g3SFgզ(x[πqId *㽋.$@"JU-a=v#3Э2E|({)^!rL<@J܂Pjݥ^ ՜LŽ3"&sVF>Cl1etCDI^Χ ,un8/WVe\1,Wj9eFu_f;Ӑ:d}j'ʒb"փ{W c'$1+оco;x5  <0 w{໫͈'vb Rӆ&";ˇfG]"_}J4`o{mV[tM7ÏگCz|E+7wU+Ό pvM,RٙWaŐ{Y.SnZv!B;}jja%KaW<]5Ўl+ NNJ 0m=|w̸?C,症X#z rƚ=x果Q>5HyƳWoZpG&"esuv9㗽xbzΝl[1jչ{Y`r̯J+ b_"3?0̰'ru`I_,Qq*c2}Mॏ^"`2J3%ʻgyXgaї"6q9AgI?rC9 _$}Θ\CmUE][oG+Y~8Vu==27%#?CJC3Ë"s8]Uuuuu]L%P1AvVIż{kM`OGH{JpK[ QY3ƉJv7#xLy*KwPWS"eGvcꌧΖҎ5O:w5TyFu軻ckUN@lD"]YJ:5ԒP|P.ŋj)u ?%, @T;y~bW()d`MM3?$ߟ!!_Fw)D Kif,il 2d+HLy& AB0\h=&MJ>X.h c&Ͼ4`9ƃ7l)]roXlzXH8Tcs *- ({ᑣx's6#̬ӑJ(Jrza(Rºm"*anþRqOK=X&CQc}p hD]e.b8;.h& Cs ]m~?jRaڇw S8ٝLA-Zu[/c[!p~IOn0$QaR0Ml=`/d?p^t0\tyYa5U`SAOd*9cꯟڙڋD_zNbM3(a %5NM愵{l: 'xe]/T3 LS`חTXBlx]a#O%oY9!XDJ[2@EW cIjqlEv wpL"2ur#\/#K{qz=T N p7~gG2)/94"Yu{%̽ۈRfXO7a$x1gG>;BiRQu@Mm}IdOT~lFc8| fux/]ԏoӀ<,0!tBG5?|mradO $ f_?_篒Iub-- _^Ok?j;VN^ T^[/ӾofމB*y~@O>?_?k?~ubͽk&~K>>H۽^$4[; 0(F83M6@໗_r8.w`ÇhrYN*Igrʌ\H΍Ti\{' $S11җ7û }BH"t,]*mGY]E̕m>;v"Xr ̋y+@o0-SY!Pfә%&&l`TN ω #%[BXX#$i6iZ pðpxq8MgՌ[˴H<,:{becXhl0CzA~W Fep>yˍoj^po՛l@T(ג\oGʀg,/`4|MTwdkP(*NȬQ.g܂teFgDdT))J o-oPԺ]z}7(/ 6Y[Uu GIJKHn)ΑP$ssm1#4 s`/' 1=akKڣDh,@ˡض&C4'ʳ" /U !%t G'qpd֙Q%L Fqަ ;a.c{0>BZs o +m^#Ck8i&U326j 3J/=iiWxs(@e+zҌ ^²XV3QԇM: n{}TsK2)GUwZTe}&+6S !)+9k'pWYg=\z]=Q ~{}W PىG$ʉs&l]vuu=O@팿3NQ8uhKNiNZ5m>ec'c) ϰ 
%X(bg_F_fw?2F'yfn`.K p:E[OQn=HέW-Bp+[]G9wxtr.&GP閣i}52YSLT8Ccq.KK{FInڃm1^Ӡ,|7ηN>AZQQ58IQQAA26{LhPb|@;.F{$$!)u W[!\uLrU_[BR<(nя;DErFjQC#+A(Z#җO*.^ Os{Z81{|áGCj'/)T8W9e,bB#A4bZ $1ߪQk ̸;+^J-x۵40W}[w;螨: ղrW$K8#x3[2e!r;8҇D;(v*.)}XT+Y_x!W4iZmo:'aUvZ9eg+1jV=p<׷7lج_Ѽ\ݖi\IgcwNukŘJjBU ]LV4#+o^Ϧn.*/~rvOl*g?E ,}Yf2VO3T\9r"U+"q wTV;iE/a*Kr҆RZ>š1P1 cZfmS4Rݘjq ^Ɛ Le#Db 2jrJ& )B*\א׉()4 b`@0_G9;N;큃DxR_+|+LU*' Z0iл%Pk>XUBͦAu3 10t0]Z>ë z}N"Y&7_!蒳ZG5:o SjjIWAINogԵ꺽ի&V*rJ߮}f׆`y'o!mC5ֽV=cDs59FiTnЖx+0} $MMUԯj$ԺGյ w+ [8lp!P0+HߤEvU1(~dWŀ'Xcs%PlbWܸac %VN 2r+2&3IJ1RY%RR!duA<0G,ouP g[@*$ݰ[11k~H3Ɠ+!zH%[]OVL`%V/ћZ+aҦJk2*ZMhN3j:uZbRx`{\ E :_-b2==԰>bBVu]2,nuP&,% BEpIjC)`=7& JEFĜj nZG;?,oؖ;mgbrUJ N<#.+>Sfn|\[wULVC&LHUݼ4[Cvi3vYRcZ,(.&$PHSk|y?٘rͩ I I]25X]O O%tTGD<UbhYΚmN94 |l5KsrDP_9Ѵ,=G%QW$5`Pzf6JB_+RLv*-Q;ܧB%DQ{&i$^ݿ$%Gb̈j_rCIL*KВ˝&_ X.[|9J_g77[O "3noXdy>[Ne+a;Wm@$*'~$w˕+@خf??"ȵ?D $7" 8FYaPΜ53M\ a\:(wGr@aeb?w%?b^V .F>&a9bɽzXx9ʷUe3!/ Y~ܔٝ 8Oe2DrZAaVLh)ɍVVYv0]] / -j%fXc~ztЙ#`Rxv@C Z/WP]Flȝjzd,ՔHKP̐¡ ,MH)GXU2l:jB/R zJ!^KU ݙ߯W 9 >/=k&0w?>ᒕTII7W,G|p~ z{9.9rbvя?|=3f\mT.[<)F>㿪ɖo7UwB?3`{h0k*"ϟ^_( (Mk>x衚! BsxpS/JKD 89UTô0PZh,3^PB*{[Zh;)$xvGJ=Մ"I_O)XK!}9'p=J~- ܬ`U})ñI"gͨaFD6>/w'`C\?qk,qGmGе]{е]W=@U[ 6n3-><u`>w.Py|x˟zu7VTe-=8üm +g$jeZ_i؀j5$1WqАA &"]IpJʄY:ZA!֣s0}pJJDz m[A1)oϥY\O/p9~ V6h*e+ofu9Xaua20)r0 .$.HwKЧ>,aV͡c!6IMLJ5Y(/ɕ?u#v~S:9i= #9HTQЌpDɐij4&9-.Ik a0G؊R\)-36gNಌǨGT'w&5GG3owZAhskD KMiT3bezRUbD!lCQx8'R)@CɥJ^#> ZJEw/jH# dc\J!Ԕ+FRG-x#OjC&TpT/h>{¬)o8-|3;c/42^ͦ 5!ëW754z+|8> %.?h6Z<| k/ f=sOzc! '[AauoJA~&yC~E<;P]l3Hx9V~؋$ 6)88D9Bv>\; yij??)DSE7R(j'OA'Obz,"yvITA)}Q9r I iBPN̠%.nI" UV,fF m1tF۳ ߮V^:U!J^ 24m]s[38t=:9# 8ʑLؖmݝk1RK43830Ya 3&K<`-w\mK40ͥa}:ňEwVu골sPbR>w|#Ƽce<O>$ZݫD IyLgIB"Sv{#^D'+I ZT )QXT('٠IQT A蠨(tAאxH]Z>JYaQm.Q^u:%*Ƥo*i҃g|Fe7W]p5- .޹Z^p3I L6QMV=wm\/-Y Y6z`_kf_n}\fJ5HX`s;\". 
,\󼝖zf1/ vLZr_q/"bqCR 1aợsI)tIQs90h"ઓta3v,\ɢ"aII5OBP&'<̄zιٌP,=%DnJq&U!VQBAXSFr^Y$ GsL &1\t'G# `,ɰ` (H|㐁f)eXs 7ULlKZLmZ^vyTv,9 SB`N2)rk0BjV OCݵ=6ͮ"Vx񗆗oKqdlj"\X0#Lf$ ql-aKS/7~n]o6TƩ>T\U\/%5"bkYǔC9h{]^+`VQ]̦J7Z+Ù.8g #zϔcd;7#瓮M}c35]ˤ'%=eL3Z2;N,bYaR]пu"uQFQ|o텆&$EL֥H*q$Qm@Q/W%cY<% Rb@#Mu}.؆^]nR(ȍ/+UTh,u_FiIby,b´u"# %j-bu!IjXxL\a5\k 88dH\XMC '<ciR?YKn@)Y@ ؎SYOTa2і :K/=/*оeBX6@0Ʉh\ tp G8R[8Kн8.$WcqVA͞$ӃHydC#~: dW^.g6^F_6~>&S{QG;R$q1fX.pfl$#W.=fϏZ8 f#T]/cZrO^HBAK⋮C fs|,5:r,A ("H=S4J Z=[|j ] Cӕ AJeX}O{\8(AkԅqpSJua[a稅`8 J7V,wlb8FP˱cOS` )[ţmdb +f K*bΑ&'?/l :~s)_Sdjakw d{ב'F|pMP&aIeja*SdVq =k Ǧ$7uGp3N/F"Ɏ`LQ77s!&[u wZk0e4w5w\pIiQBPG]QrrdŸS 2$H@!"$b H`lmBTѺα#ј"q3'Ae$.ւXxv"A}pDXQ~ Uoa-Hϳ,@ʼnȼ= mp="lON8fz\Ͷ.5Ҹ\#.ےKp]HDF&q(b"u)&rK)i]a/;CCrq}D@ߓ˱cw20K mhR4j KB%jmRP2ܱZt} 4יeL"=̉G4=蒈qmy[h_0!!mKLnch4_ڰ;Hs^<$'( |7QHʶ%ruE8of_+ܐԯeMTMu=m̲I7_f4oV7+rS~U7f b(#+ #Q CĸqNA3eoFXRsf4/QCKhX8Mz<H^bAN]>G]rԫ3 {v6Y}zp&)yAZ$5'(xO~q ֻ36uG-Ltfu 9C pQ~.N2!\"F{t&'%!N;+S :rgFrmJFr6ONID<%L9H(ܼSI'SAik9Ug+o;Jd&bNq-I * Y"$ pUb"bhhfAe^@-'ep@B!0qB N5B~V>E:aۑUGl0> e={y~MnV﮳xg6MCPT u*؏6{޼$pٚt~ɉ0&1X8E62H"8@-3D.SwR bO,!J҄!!0?۩o!@][8o :fgnԷ?%U^MP̺O>v7w~Û\Lf֑\D0wp+#)cA  !\*HN'zMB 5rfgAEq[ lA+s2@ nEbٮD&"\uadNsNa{&!sMe/5egw_7`q_/Fy(~; }um{o{m|8b(sa RF:߆!0pȀ (Er!H $Nj>p{3x7OӭjyR-]> jӷ9;/g{C&MW\)L)ddoDd&!s[MUǰCHΜOqaւ/DBkzOqSՀ⢡,u ( t떼eav;}g"lA%GYg)!ҿ1>jXQyG!/jSSfڸD??Ϙr(PƑ~\,t4Ŕppn?Z5>mVp(|}gVgɬ>pg؇v=dާ^>w,ꟃOj>YCRPAOEO/>ĤlMs#'miomf`^FtMFjCD3⎴)CvQa#@%6ݾLxEC,$Rp+جmd05s9$-e ,(Q(` sZńG N1vW/TE3ZÄ" hnxsn8WӎYXzX[3*b(5Dqkc(qL04T@VI(LSb+qbcu=\2(s%NAqɶq\ݻZs:NaR+Ȉ v{M[ @fB(E1$MQ!V1b `wjWGb$2W$v:ℑ$ni)SV*c$IDr b1"zx!LOvSkݹ-YFOd_3M'XU-il޽]q6ճ,6A{L ?u ІhUm&r@>F xA?@%L$ XB2~S:Ns8aP-&WYM#{ (%D.a ϐ3l^"ݢN.>bԲoiї x idSv]O5Ҍ+L;RTo2rR0 n-‘?{'^Lļ4D/ͦW\Bq k/W)jan*E-RҷJJ{^byiήJ ja|"=9Bs#3@ë{٧;gn2O gG8Z;qVlCS,aQj#;|| }囬iŝV &""H# a "-s g*<|g'PQns;NDG[r=Ԓ:- kb~r9b]cܒ{JiŌ0Hq;~{x>E~$bBh4L=B=gz9_!6)*= ^ؙlqMq}#T,^Un-/"##"XZVk'\:xCZM/ftL.}+ryAN"Gp?T18)D"&P-$FYm0G"fbC)$7ȌH+#Bai3N> E5H2QGks{t3X6f&5yzI|twS,:S^$tR\z{D(aJ jD"|IgkT (Kf)VJ"㖛GH! 
ew} .Yy5w8kؠ:=\cyD:zPפPWϨC>SVS1 N=>P.F&G_]h LWVROvFՆ`4k,O?共E,xv?%6D$ Nb3tLGiy" \Z>Ѱ.oȇ$9uT!q>*O.Jg \>ĸ"xx(4v}\h:nWޤ3rRw%4Xewv [([CKeLdGs@>Jhԇ; Ysòȑ}tNrNyb5Z`5F=r,tA('3iۡ\[a/ #]A^ ؖI@sz{"5$.0‘ Bl!HD<ksS.NpX ¤4aHϥ9dK!%>>B a}DW|n6X"6'%~al3;_, 'qobOܤ_n^(.}c[!`U&0gVE,ԩk w7["<N8skJb=T3ZRP&: !t{#Dre00)1Y?O_i9R9(5oٯ2ϰUM* ը㒙k/Ų.(mCp7)gjIyWHiZI~yfӮtZhJI7ILJo ەx w+Aܤ'i#a&95=32І `,K{q%RiB5+tCWO֏W |ao+AtkcFjS@ssҭ$U].Y-SF+!+6z +n I OdFSu̕-ǙT B`a8`l%t*^?΂[rh&B[ĻLt"c84~<]?wn6gGgs$`iAp<"c_NE$䝋h::xܣumj"5;*u<M5~Ѫڭ y"@8x~ti;Ki}8I)"sIUGW!aգƄSo^At1$k/ __ƯpHv &H)NZFRhpOVN/CapήdЇM_RuS}JսOTbḌg:e޴9EKD̊}׉\k÷ϧ!M^@IqtR H$9Q\ iQy.tŸ:?;CݞZɡ-$JJm  BnxS ^݄ ÈKO DKQSC΂LLX/ 4Lcb% $6(GU*5HR;nԾ!G'WRzAVo3J۪k5d2Z"[nZHPe١e)A(_z'O27|Dguu[%縻b㑽]=6Y'ճy va4>6ʇ1oES;w`|fAwH7Bd:ou3uo%j 6&]migÞ!ܢ[*Vʹ?GI!u- !Qrvŕɞ9Ȃgp!pc*W +W9V,JW\RE` e~jnj0sHόycVOj@Ǧ I .e4ՃC hL`{pM',^=sk9zr⥒mo[x '।L3fjƛ›k}#1_O[$)090'; `U{nOP *8h$hBEb rF:4 hB5)_8ԭ rA;{ŭ!%1MdGs)A(N)"{MA%Fn$QY1LX+j&Ӿ*wؑIUŗWCM1G6㓁0c$)PC֑§2P buSx]n*p14D&ila/Ce Θ.0ǝ7x@?QL$A& L1"ڄÒRN:p >k˪VEeN XpK26˲WTuuY %cc1!ZuY+.rS|. V#3emCrE3vJXidۘ6纒䶛2S,9JFYs{!wfS eχxmL/egxߖiOՎ}CX^!H8%U N& ii qPkD?O?<=>Oҽ|%иzF<(DUAo/NŊbq 4^^ h!>ޣ.j|^§EnPζp ˛e6~SVE GlLs=8$re][Mek}8U@7 ='ay _Sβ׸f23yqȞ|n[;4]ch&?& lp!" /Uv69sf0Sԩ}y[G-aHt`|kqa>_Sk>ïéSŧUTNFyFx,5Df80kxK1ʣ8 [GrVWG,U\5}{0%J,X=>s&`wC<>we&z\ǣI:۹MnP V[CW?;9V#v^XJWe2UWǤ/.jLn"Os-p O3ׅ6("e]SRc nW]8Fe]݆v*  8ѩJ 0VNtxED1x-_s5XX\ajz ֥9^!PiN` Gœ` $F ! F)΀qĊ@9"b(ZRs&J5 h o{Xȼ udAoj043.?ZҕKu`H9=/F>`=/n Ё-&r/Vi`f+ lȯ_έR-Dmik2.V̍t$2w7/ewdXS8ӧ!`U&`W͘&>",bE7yEyXu^C"D--05,f$ڏ2N>EY?עU2F1x7gRO>D@ ChFrYDq^9lڳzƛłY1ef)cVZ(<bPxGj *pLq`BƴӬel9B ?zsAUSdm/].t,uW r>D1";]PZL jsYTؾB%Jrnj#a0K1JbƬŽdAzu":RIG\(B⊸ȵPUju!(X9CPt8G 5C,8"  =e^HbHw}͍HW Ug+uM*>\$ΨX^IN.~ R)AhjƖ)nݿ91S0a4CPSt8G.%䂧yg6Bh8j}BڤX}@@sB`Bv!B<(n;z$Ĭ ` e=}l-c1(P%<(GM<2#x18`,JZ-XG7j@[̬[4BS(1o/8'D.KR:֢DZI)c`J%. 
>O nr=k5 |kw-, )ykQJ ek\Ph:'TdI掠cpQ`LZ)& 5n)C2F!Γjaxdp^EDv%w^jᬦzzq5@_skeо[/ݔ4=IҽLawaTp`И0"ȢFĞN#ڰNjaEG?*RMJȐ`81?H>X ౪g+9_T0EދJ۵Fh,bNS5 Ժ7Nlv]]Wuu{皼^RƜ3^*}|bm&$J)SjB"||bt==ʣKu}&0;(3,#fl*4ޫy԰MWĶʜpr7`~<(dBv0.q`I[ܚnHT*оÁj[͖7K<njѳ"oQ`+uoFY~rt_>[UI{oyPVoV4Ց}&>CV ?B5d(Ė1b:0=PdWT-ݼ{oMA;ڹZ D4ӓfYB&TA{Ly:7Ri߫MQ+ nz dX_YCbuIK~M%+ t,5_zROd:H78P1jJ`fO d >v` Sѳ|Q>2JH-rdK4?;*W=ͨ$^eF??uAfnAbsWfrd_ \y*pUપ7r5B+8Pf璥2͍y <FTpjt%F/7Źbv^Rla;쇢c hh{/b4Ԓ?m1gO2o2<\?ZOSdN-I߿"m!EP%E./$8D&W%L&" OJ)$aӂiQh6m Pmy A@P8ƚQVL(#thhv[֌l8Fz$ \]a֌4=뤷!5^ dS#ժft(w^J ÀE HR*SCv,=AѸgn)q0ʺF{'{o.,C2x  0frz{F4(\egYz>q3f>%}f}8W3W<+՘..8AHeq[DTҕT> hUG mr5ˏpvDD+=>5 SkJd ^gL@`|T~z悂dǮ;XĮ1,3=|qfrc+iIAxcѥLFO-%Z)1\C`O$vʶhá-8DӰ,ύ.Ƃ .D jo FN\-gOd94TC ru%ط2od4F #vz:(s9V͜яeTp8.*6ovL6Tb눞]hBح Ps3lڱ@ߜXHy,{ZkVEnG~Qiy}I`tctWT+ 9 *fۀu_5PnvNU]Gʛ7gyQas+AޝomY~UDlXݮ2"a‡h|8'~4>W v¿<[Md-Tׅ-ڞ9/ Pړ?k}tۊ(4ڞ43WD;'MJ[,BT'v6x)g{Z>43W(1PX߾qn qB4=v6*Čg<>43W:ũ}BvNѿ}41&3V)>2Q0y$ShIX"q+ᷕ$(z LXɈ>0F(1~ "ѧHZ߯K8iq֔g'dxhsSDl 306Wx(bF”JJ2F6LBYl}Jz4ةpL9g9wDSRͤI2ҤF+u!67jTQvbL(zi%$c|fbUҼ& &Ռ'un$$Ƀ>+(bO۾gP2N?7& PJk>' -\Wl)DMg!bj)Dv>{oldBֱ lK"ܫW+W^j\mc3:Vd&!i2dlk$%rCB|ƤBلv0EvKB̺*p % ٺ5г xKb?!>"_~g\yYSԡͭϦ=Y{"ϫdA=vOT[n{{}wWEi8?L-Z}%O>/[&׳?QdޠrW9:wLbB[&V|WL>_rPȲDx/R=.SyN%4ixr߻L!Eɜ!YP1~}r")1͝"`G~;^e5y~ndv^0N8IS?r-@th&*%1LsPrwsWh6Sd=ZkxkX;LJ4|Veq/ pǡ2[vMn 2lؠq/fY=YA0C34\'OCoWy>ͽyv=p]"{xFW/*?AjVkO-zcUU,e.}qƗ.1lT7-gPeWޝϯns'a>yЈii"Pы lM;('JU߃` 䋽mG !]!XR P:$͸I g&`$6:t a!g TAg X @ ݨfv8՗u'ox+ =v=roBN"CtDKIΕ%2ts_ ).q/ig\O41 *[djIR S5tާo b~)ᒇ?eھ?AGӬH&nM"-ڟzOJMb?E*i$c5ieSކ,3 E`.z9/T4p'};@-5 \S1n&:ae*q'` aD.KRᆗ'N4sT5qm~4Ks"c)J2R"$D9ƯCcErޝj&0r 4GxD= 8f"OrBeyF&QǸBa ڀ$\P㎉Q N%L<z CcFMIN5`(';R-MFS- Wc,#J<6X)uY;RM0J8)4N5_|m= 3gba.HPd hT_֩pWasޖ>GejÁUk)SaZ _mqL-e*LK MQK!Ж_h-4a_O CK=Lw3~a;b =$:P$@zł>^~csALNUjU%Ĝ?8cRdc%c:dztwj%.aFk.HD m_XS'ӂ1 *S>-PH6TiMtzo)WT(=dˁI#<٥V2(JW|Ƅ6s\`ޤ@z *,1!jU;̤@2?ӄh%4w`uWL&'N<?Ȣean#{<| 12d*yʁ3ϩ )np0jEYJL0Z2{(i9䋮N5:2~}["~[ P .PxNr~ŧ[܊Д$(f!By3UDM4 \'6l4 X7zmo+H oQӤv&I=$ͼ+2M(: Ƙ!?r@_v.3rB9qZ)Si-߮Oam1s%YF9WZK5#E*}_AGR"TpH-\pF5'I}'TkANWj Bo? 
ʚ,sd"4Ku&V͸OxMng(.)0 NȘeW9҆M"JEZYЏ3"4VnY+r ؂(!JՊ-Lh}b]Iҿ#l'S@Ԋvu뻏˛3>bV.-Fr>0%Rl&9>p_rn?7aEj*bƟQ W[y^qmqW KpC$sdԯՖK_ʶ?X#R沩c?ݬuˡ*@D@o&m[5e5j_y Cի<kTr*E,>۵ed>n/}w+=l`%tǯ͵OܪaLQ {7,J7?_-z[?:y 9濾72`Yj~ڹ+g5){5û룤WVp$-f{HE¡5 eC?_l׫"Wo;bȿ̳FT]Fhetc#L:M Qoy ~_U8 1tN$zg|WxmӠ`U'iC3pV x b6=ܑ_LFHE(.Ť/7i J7J8CyZ={䉳_0G&[zT;l_qAK߬"_/~W?sBs$fH߯SKÊÊ}}ڞ^9'-\xMf"eJASi3Y Ŝ(}mrڎ2 "lD /,[ǛY/3;}؛B3"]*F~!!e]kH2ZgZ[8.;:^,ծL[,9)9]#p{BQHѕly!.B!(R:nw|.;VkWh-06>u3k&7]Kw>P|BK=k떇ߗs˫[@g>{%q9/h1318g2%=5Ѭ/ßĎ}`N|ahڞSumn.~nKXk")1.svLlxj+!!Qr(P<6@`$I3 uxPt5ne2@Ԗ,V52l\pBny!+iM{/ջmP4oBZ{PG1j%R*MetJXq^tPz|1ְZCz5ʥH/F/Sҙ<%svTd:Dm2c`mQf ԨI f 2_\(fa٘ UFZmT/(:uYpGF*/V,4ErYRVި?hޛEcI 9ar}qg%m  T|xlr #PW#3Y xY)X">ʤcCF& BuZ"-&yA9Xe]S}YLHj;Q^/dB6/UӶJ9]6qwsscs\G@IL x*fꝀ Bs>a R]`FtY}S,mG+tc4fNҥ}G; U+1o崯Y]Qvaȵ?sR;NR2H8g97jx;mΦqͱ%Wd= Ux+ӟ RυMnm1ICauq c8Ngy9YI88't ;1l3(B" Rc{+Q}1=E>g?^w(92O=i914щ )$(}s2n lZwAt^&2nn2 (L~hM* ,Z؁+if)ˉ@њv@;fz8ˑu>ѝ%/vrr9ݙsǝtg5Wx8J4)Qkj=Ԗc}Tgdt[QVB{ٜOTI >7.R颓1t$d J:Ճl-#8$<=9XI.w+Fԡ+)yFBxrۘݼٙ 8,h4 2³>k)e,'3ՓExx~Ba)$)Pw9;B*ɂR~[F%|#0ZY}|a<; uꟂL:yGwԑ6t\䵌u,ZO={ֹC{t4dz,&}^ #@b"R*r s$"ǝ̦Y@[F _e@F.R^=c>Xls34-G:m+_8y!)Sy:WR^iڅޝ⦱hJM`]u룋,k̦u4Nݒ!n֛íwvmٰj>?I%>[fV\ +@3UqzkS*먋#SZUzs|.o0 P96@p]cC ]'oo^;pwہ'!sDE_\7wg7^|JIr}WO tx_Yբ`jqyZu ږacc-!5'&\ ;ӛuFŸa`qu[+9S-4b鄆0Bj8e9$ .DqmC0hgANy2+tråfXzzL|1 X-rV9S7ID0D8,/FVv yc/Yv=^08 41z m^ mù[10Cy4Nf.K&wp>%i"|9-ww@&TuYRY"._Njd JJk1^9-=hqJKsqqbCyתq'9@Zー,EBT"=~\eGc (>>dLai=7LrXІ;Vj5EJ`6Q6$Ro. 
Ae{RSnr{,jj.7[% Hs%~~rZ[7h]K`={cej%&d4Y+љziӰ###`1祲0/[z N\]t닯j؜k}TBy|]̥;J}+_/GCqىyX6F(イCsܻd/i1pt~䪢K$qzMLYͺ}+:HikfY8ey<'89.vfh"rQ]8KǸTޙqO5]:zKL uZITz駘(cHP!;fL %3P-݅?~JylyPtt6_}Z&>,>c {Cw{y# P:mIze<)u<;1{-rh]teuRḄ_n|-˿7<[_G7Hno\Q`}%_ZƼĶ@=G0V;N8騫 ꜇gYμU62>Ʉx%g(D| 1䄎ء❕m8]zSe:H$3ƶSF>hJ9av'Q&,zs[H",E.q6<D0,z@ɍ&OIsE߹7J< .n آƼ{_!'*3$9ŚsmO|Y 4k3ܐ_@M;{כLj':Gd-dëw޴hkU~h0.8K>E#<`.q[ 6A<$ + 7ʛش!~+%&0ѡ_OwjͦXJ e-/wF$kP2zCDZU})0$iu/n\+_ZEt=eWg:cC$cɶ꾍 c  =kk6J]_G)ՙAj/5|}FD$A}MZ IO')l_CR"o%ꑙ;2)wl^:/Ү/Ϧ^}l@h>p!\eC~!\pUJ<*UtZЎ#c h|Ȼ5pI3BICee/ŔRcJFbuY{e.'R38UOY1Zʑd@eꪹ^eTL¨`%~!edtS *TB5Ѓ%a8Nq@:sDnj%b/ L3>3zoAsiZm#1}[[Z|0t!cSgF-r,S|u x+\L,J>v ?WI3h& Hl ٯԝ$ZɍnӘ.Rq95iڛ{y*8SiPp4]*}xau;g{~|?]ַ,ECû$h//o֗?i\EYe޽@\~>c!Q/ ~~LcӰ} ҂ k.A9\Zሼ-riZO"*Bq/~h؏˩ŽO3Q$?7i2nC 1/lX.5'8eg1N붓9l`۶U{U9zV;[U#l7=m>oj gYXf7+6X8CƿҎuw4L|RhJaw/d ŹAJ͍Q777J(on.^D-QyL3= Z>2ZܘQ(dœs L%Жxٚwyܖ)&6bz7Ŝ&ɤ:)mGYsPq&|n\\ti&:FDunuD􅼢ni841myekۑ3q!0OVq޵qachq19o؋S<ǧj#MhȂkؘ8xP<&x9o8uxF qc{Iٗ/77q!''o׽ʼw <n~qqYeXɑOү*t5m\J'jjOuMSwYs4~)[qaN628,A6fϦ7+ͽjTx^7TQ{5E w4 p ڷ8`%IERs!N3tPK?xhHX*E@8ٳ*配kcpˆ?Jm6x(ycbsw~xw{m:$>O9W]}E^-\xؿu v-\70yuLg^R@ʗ7-(:~Ѿz^g=ӟmdCtTL9z7Ed6 (n[w% -Yn5";11e ]?n{Gncm:cQݎ?K>|6ZV#B݉)Ss6msK9o8t3\<"sh8T?|VdbzՅ2&5&bJ6E'b8߆_! &+ʐY<#p[텡ipv黊]E\ù[1jUxTi&"O6jI Ւ4|1z2Kqykf+_HY@1^ӤKɬĕ k%L@7Z1 z6(^rs_I e^$XUrm|8 A[{+Ah*փ4cƌ**իm`hRQ,L z\JRy4-[NJz\vheKDBUcYx Vpw:̐%;JA5Qf4`c _?]Jg7?*@Œ. 23紞/ >-^v#Se6LLyʋSzP[iRJ \8uLsI/hkVE=&Y ew}Y`oDtI[=3{n9iKv촻bV2*憜8쐜9EI_ 8s'. 
)~i5I61FM+F [E:: mP:9eMGwi 8K ++lv5I 5}f4<*MZj픶& E/n0V :Q %:g89Z{oi+"Sa$2gVuJLzT Xe欔|!IkD>r:)LJg^Ȗ$\:/|5HˬcV7Dn ֶ+ڹ(G.4U\d ,9$уFh`IaK抵ft43Ղ‹ޚEv>ZhO+UXZ:0.,8QFqFAsm9u9&?M`"ps;qԀL8C[kUd 6pNd$K4Hy V]*!Ԃ)&0~nFQO?®^"( +8r]2DvL4RO5^TdW9R3 : v-hsW+57nSG"^hۛr訣6|#PBCG6``FXgp>|7b@ ,'{or9`8ZU([5 Ey ݶAJ 6FJԍ7.5ёOb,͑o|[^2kʳ BQN4߿!w},J+k qؤ./"L.KFRD@lJ2\q_,_5xXjg \,^l4Jst2B[ .Y bA{A!Csf]5LrZ8Xݭ Ig4j4}mw)>0FG`XRXZ q;q#KK?(Vˈq sC|@ƔL J]w ~&&`-d ?^=ۚ?<@3^Uˉ= |_Zj=75Gk gX6zU!;ޓ|MRocr9Q=rK~SöU:K1ӖMW]!a<,FE܊Jg*"ӹjBh'u GSVgzH0*m.UChYObܩ+;u}՝X:oWTOi& >4M35~D'rm}`Scz$nh1H'k&z=CP0nQRJMVmtr<I..eFQu'U1qh'qTѫ3SoǸgɟǠ{1w<:ӆѣs/\L1؟>&Hk'VߛvJ%d+ XK3CayQ,›9_>Xt9G*.Љa51zH=_5 iW7~ l&t^c_J{tfEeI5vχwb9NT@ I^7. pjz)e3|,7/S6ݮ¼ e:Uy6_KB23z6FzO iȋ {R|~Icz^t3rSB*7_tUuܿnbZb3؂iT"X>~ewC/ܖ`ݚ :pXzsbYlspG)^``2So'"mM.u{SʡJmr7߮okVƥ$dc~h|C6Zݩ$w)5&} *9-(7A"sҬZkڼ@q !VNgo Y Κ8,Q{Ϙlϯ!>؂G9 v)lyk vlvtډ;Y=άTc/ k4ueTڑp{X5Nr+f"LN];E3ne'[ )"Uҡ]=*b4A"K)ܽ+(roN@ڱ{ӚѻiJ`zV^`hLVBZƄp겼Efՙ$`Gc&^( B -\:a WZoӱVr5ý(`=zBKǂbe K [Uo*wCMQC ?qa=T+d BY!$1b7T0(l΅᪝\r|d3r0V'FyW{Z<9맟er%?ŨD~(A0UeuPqg2v)Z6] ba96n*3 UܰB-YNaS9-]B d$_0Ru+TgoҔi N C e8У2W+DyOtۋO;U[c5% wbǩ^EmfClw`'0{CUƤQ+4W.S|T XK=3\oh\C$ C#žEb{w.<3V_O$iHDhBb a!q."1riD?41< R57 Jj`'BY k5- z!L(0PXOZǂ_B '@VadT-&RlWj)\U'p-Yr*F-i$Z Q)Ue.P*#Qi903iફ|ѰnvSsSs{]]5 %i^tV>#aid އ ciDȥtzkhĸֶ,+z)_qsn]F^hƁb'דs n鑰.~ ]%io~m#uט:O.x}.'n㧫OJե󗧟Zws1]Y>|Y?ҒǏ\]^OW^-+nwU#Zs6>%Izש] ׊3_?}xV@}Cj!yl4gȮשmZUBػ?捰/S@9hѓ A\h{'墕M}; 2wt%Ue >3 93ҪE Q!J͇ubozɘ~gO)02NVb6#m3&x^}WoSx}G91 ~+eϼl`>84j_ F lr0_תxFHX^3AN^mp&5/9GY/KѬcNlsjK:4hGK9av/i#jFs.^B|Nt;dԼv׉t>8牳mrEHY+4zTFP;#-9 gӐ)ͻRFRP @5n1"MnscATWUSZ|ŏ~<njmfr TRoP*O&'T2|^nrVKAM ?^ }Bhg$yA4z>.cdਰ WY ߼?c8 aӒp}!@nJȒ rO N ,8 rPtI:B!@njLq_IImւ@.# AH<)c]iAw8öBH: :r~<51Z^kԱk7FX 0>SB Q\L׵(o1hs îD /On;F,ر+-B$Fs_ӝm_@C]ѮU!U !``M߯6fuV?/|%կw.[܈wNq ]l-]3FW?OLf_OWoe{|՘`=&uca/!DĿ)*ihpkA==(kNDL:$4MTfcۺN5_@U{(KWxg4*gU?~;jh2&}W#}X1mg4rG_ pTs, .{>}bP,@U { !jQjQsue5^phL>"=zp$稵Xf`t(Iև[QF6JPaoL Bbh/TjLՇ^uk;pIOgM̨j|OKI>ڻb~6M=Y瞲7{h%v!M>MbN]'2[]HP}agvQm9D 61zjJwu&>Y%ɖ7Z3ےFϽ-[rkF)aG{'~rIǻ1*c"(dY+ÔJI4,0$w 
a@^3&@ >t|lW{_G0ՐQL {b$4Cts`:˚o~(95t8(; A3-kH/F^L?WQ5W'/1ikp"gKlDO]BpDuNA% "̻u r]p00.4."ubDkI 5vKF +;)9iSFhnsDf, ~AAARe9T6)1,w 0;Кtf|2?~8Mo&I& q2  &"VlqmԝCS($."~^m.ʩ%}7uv7}~=,S͒TKMIa47#ٵϊn^|<^qooKmZpiR1r-p>bP+'`mkbmT踘"PLC dCA$#):*DsS!ߒ8x&WCLyb-VvZ=3Sgq&9!?_=Y9(}pho(WPMOBhoRp}@ ~{7DWQ|9>9!dnAʨ)% ,gVڤ5bcpy;A^dΙ3~-`܉]N֫m }ެNzEQ7v\m]$RkĹoQ՝! 䳯JnÎR EfǑ@Z!*ՙ'ZU5SidܾjZZslYTv!F©:k_ۆ2P;+OqiT^Aԝk y7:.ʈl)7ߦw5(Ն &PXL$R)/\Rdz\9a dsx 0ݱ .=lL{h;#_1AU}hKo lY}H2vht߱nw4NZC^z^aU5y/Uk-%jf\p7: R/30.H3D1LO_!"\C,&`676I) TZ^S#j(#2p%[(Y2C:)L<Q" & h"7D)mMXSmGLضFSGtnhf eyθ^kGH(K JQd ,Qhg¶Y|kk5OG\V^)& V#nj~4'N)C1dFj|4'X켏Vq 0& ^#НV} 4j54fJ})5J[AXy܈T[Tdp+,ÉYIX qEֹb:-RPɵ`0ԊW{4#rn8S:U2$'2G`ARV;_a z;c[ |q+Eu;ony7 .l+"ׇDXE@ hAlL):`uW;~u2ki!BZ7D\J*}L!jUB޽! 1p-%Å{zŧ,;g"|%vUqeD%vmCz2w)-()`4- L-9)ISJUi*!sRQ'=hf}Ө}B <ݱ&I664ZHH+y&O0:N 5VF 5`8spJ-R%`vhh{^J+R8Q"S aWN 7I>9Ҝ*?(;8|}zD$~ hGi}/UHqRRRcpjI.HxI8E #dUDGjN4alǤh 5 PGz_sfJ]2zBj8w_&C_8{n2o=-Œ.V @Σ&6(v ̆/}nyNwpM$57N3ꄝ4?Lhcf`,g8ZL>Yq|/RjZ',Fa1ޚ(:Ob1z8QCtoLzzpcDHdok1rQ`c=U3s2Aà^q XA!AgWL`ɳd6WeT^qGԓ4P^:u^̓EK?YMሸݔ+8{=~tן{ͥۃK*,i*Οc}.ːWoMНbQ_ޠdeD"M aLr_~{  wͳo~<8:ix<](dz?wW˻뿸%znf"4G+7^?nq՘:Nyʳ4Coɥ2BsTP쿸]|3›R)@!{O:wdq7;~e5ҐNZp.ZR .$ՓsA~,˧ $R53 $ }6T4.݀Zu F@rr ssgoK\M\ /4'62qR h.m3d&1#,W9-ы4ݏ'5 A9y٠3_~)2|?/o|  X^!@..ٻq%7Ld 7/=6m#=vDI{pCn*dW ԓcjbGcTxrLqug1LPWgC>Ac7*Q\5):OLb@jM%&ra41 s.XbbE83δC^rMe7 S^''AZy6@?AI01eq_P'0RZ%Z4CFX 3 Ǜ(tX*ț@;G-BZGi nU iXj}*. $uR0[(D ˣe|Gk&{έ%B&bF8M[Ti)vai@*LSIt hj_X$9(J`'o(>c GTbUT8&@ q)EW,2C?AuP^&`8Iޑ=&9 ARDž ;VE ]^F^!ql!-=,`B!%obJZ6KOy]6 b'nh"6c"ơukޘ,װ7:؟R1ꆨZbQлr!u!Zc<:9mkNzx_sPj+^+nupX2EA-F'4JA(4y|0"?$%Q445zzHpβBNHC_;yMAqD'}b%gг^8S^Q#4/8zEOq\E7Y )cM-Hgt<!BR)Li,hob5O^驳zp <IJm~*C_o . m4̘dd}a;B'@'1dPzZN(<&ޕ", b2?*~.cĈ} "Hs9ö2?,)}F1 \.{g 1fOcPSQSO}~ SSP7}/ ,9 HNZ!ʹ9'1GwIĈ!'۟|zʫ7+avAJa0-6ݽ'oGz;[}C-AN32YşHm^=Tp*? 
LjR[׭$ 9IeEup:SDyufP#QC>Xs2"|={!#yJ9c/TIXUgG8\;Szr&9^o ib=Tz5h" SR 沯'֥o  Fd!E)I-ALSY@0a;>/;(,-*H>Gwɧh0Px3ַŔP"#i`\)Q?m^SzCz*6.W9+_y/aYk[rgڇU6e5a~$D J?PMY 9(u} ]bO5l39~vAšGۀFa#i߭?h/5e6:d\R`&o/uX\oI]oww~^(xfRP[KVgUQ_*r}L5k]*L)_ʊε# y&eS#]I6 [Z[.)&xA)Gy^ޭ y&eSrFލגr1H1w4nEDp[~oDօsmoS|i9ẃVر4x0K7!W˹m]^KŴ%bW>D29i|RA\y3& {XdMyv7(GD:X6r)nƌuN9ܛ|p{D09EŔ⮱/~Ml~LccVZDctuɉUv?u-L}Td]kOVkY<ٓ N ヿ{\?.>^_upujO(L\,ŷ _hdzrS4+&1d"k\g+x+=:GR/ؑgCYD`D'L71 an䥰ǛEu4s9jPa">tq )Ȳ l]O3#j)L~b-D#f&{ӲZ"K{6_ڛW#uՙzWW[$E:i"9&I9;OY-J= UUqyeijͶ9/#-An8~*{VVuy4׃F/yh&$脇5N\P-NM4˦}hݘhARw4jxp7S>\܁&7JB޹fٔ 8P5A侣w;."õhwB޹)`B:vRyhuځ$]^(L(I ء$)IitJF"[R+v=^H)# 4+-VdcsSRPUZ)QX!7V.l'hY)'劏([R'V;/l qa7VTӶR iV![*FaҬg+=i+MZ>Rl4+aCJOJY1 +e+~%5rfZy14+-V?i+%KTk-n Qi<J sDO40ZፐvΤYFsJh)BkK VN[D[,B;}E!M!wR&gfwo~͠&/OTo #lۦZI@6o)S=ٴR JG>mwϽ}zseq*ߨ,n<^ٷ8~ڇW)?5$My; 9Ū7󯓟_}5^xW=b%N,M$ TO/҂eZé 4(NMWdpv+U-fm=\^`y{uL&E HΉ. ~p.qlzEvb`A3L<俼,G` ;sYLބ?{W㶑 1_9iep@`Og#N_60ҜbwWMj$JCIMMQFbk(yUy9QKR)0zNUS&É( lnyV:c!}^6ϒ'1Nየ!Jqۀ4m$—Jo$ 1*i3M9ȅĒs-*%eV_h(-}ƫ!ޅYۙ*=,-§V=>w_-pg2)bYg_=iWAdw"g/8锖mD<"ڎGD_ u{rHF@T s3G+aӪɢNX4Ƥ#cr>R; ZPq>z5-=ևڡP ھҚPx23a[}}Y /ڇ#!B)8VaSSF8&IiBYBcJ&a&:J6JļWOp@7'1#ZD4QR(DHjDEdH5nff2zp "bȠs_ lXQa9+Eo!R\paմԐbf; Ҳi*("VL^v`K &GP~ UŚK;޲b,4oegM 1:w{avָ^w눣$Yb)Dii&RZzZ '-e$뢍D'wRkkk)w\yyN-+>zwUd-#{]fKuwRkPi)A_*y'wRkAEi)VK!CRJ {RݴqPXI}+5,_zZʥrim)XI}+&LZzZnvR{]{]XK}+D]榥߲XZzH]$ZT },`nBY>(βp1` ZctF?ӲŶѨ=9[LPQ |(ViH N\jBOnIs*`g~ఓkY{ kƽaV~vq 8Eȋnj[0?E~?_)­^T]ĞS|?I?3ӥ.]^0zC1^qԤB` /&}2)?9@yVd?9_]+#N9nuc2ٕd xy?WsoA~Gox a`2ڄ['\CtQiQI]VN*9bbw֙<}$JY>ZL+f0+PWE'.j]8.9]%vIb~)s.}v)g٭=Lε!tGCDhKxfjI0e_8e>A9gHEUIE!"7R)"F0=֎CLU51yo.f=MSbsߚϣ|ѿjTH ul5>!>:s`"@xmΚ=6JZΑS}PQj3+J>JR2C:l)`5c3 ].)(;:|^q C}x .*o+ȼ>쫓OvUâN0٨Z<]~z '}^a>q`f4Ὴu^#؏m6ds;O1/kϷbGj5'VK@ F"74fh\0("SnWcDomp1l[sNX7ևӥ-B{3 HgOSd7 XĦp|MLgpXȮ8!{lF'v=a;: ߅Bo#4f?0uY| \I֥ Nየ!bPLJR[vn6/iy!ߑs{l~i _~]"eruT2;@Dfq„j '\ȿvJ+ЪƑu+:{Y /N,~kcXs"HkX m5nV4Cvi"1DnuNx .8Rԁ`UݻU4/"87;gUa~gs:NƴO8,>i~V/'ߍ4H F$ 7LgN";~" 4r5KN.}#`cEtiI !Ý;͉Ŏ#CǴ[ixQz2$R*;sۇa\0<9)QajDXP"Pj$ cE,N#WD'+Ixq:JDIɷj*Ɣ31NT4&TD0*3iCq$4Vip P)Ɗm2rfN⯟G$ 
>Ѱduu,Jz|[2a)ߑ?G%Vzd?ehn=No"bȠheC+7WA:+~>,;>Jw1X+MW8/~XH:p? L*LʫieVv `^jJ$KWgnM)PѬ<)%8g9Nesw9. AEii}eDa!X,0Ⱊ(")b 5"1R$b yRG؞d)Q6iH`m'N#H/q3-!:"b#$fK&~64`"X4$1JJ%2Da`(8:Q,0 #ʳNر RbF '#xWO-i;xW%qhhJWJa,\Vb0N_>:3v#8yb:Fh5.o쏉0\ ?}NLLpB&̾tN(ă=xÔcXBHGaNΖڛyGQLAYd܃wfxD٩͊gum6ldr =i8Q݇Ac0ץo޶ZZvݴ[zlL>ު- /PmubmZux_oͣ5]X?jlI(VF{/sWb](Th7lńUiV.oڪ N1mw^<}%% V}A S\o:Uz}?U@~/IMV ZP1xoR;nGWgpеYem/ҍ-bP`@ MmOGhṅnt`@ͧ7O#0"5 >{>,bzs}"o5/}kQ84ؾ湰/ 'WBA|*$9WX} -w 1HUlsq#] "kq`rz? ~ gٹF9ϯV&?GvSgۅmnc3;C URw>l0|&fO`:mF\# ui<+Lx}^9``T M AƄ tO#k6\zT'ƐVgZFÞ{s|ȝdVnDw}aoѸwNbbm[3N*-;'h/*j6$s[ K ]B-n`hu[ӎ{G>dn /\<L uۀ}f%@B&fk1,8:y:PM=v<7f'HͲ_#/U@۠'AOD96=('Y kA NFj]<#K't@^£G(2Ys"=\U>Fyev6% mI&e #D[C]4AQ.r&< !X⿌k7-Κ`5 uDaOAfQuKb}2ÊsHݝ!ւjYYIJt8m1Ou/d,{k8]Idq\(.ũExl ; Ku[O``3Os穇s:nLI ƋI5[5Tns (J#32Ι[!aCI#*BD8R KBu(DpdD]}.=r5/{ݤa!z/ B%3&KR 춼2q#0-wU啕KWaԋ{6j(3 0*]`lgDBkS "=k"FYE,_Bo$dO ;O!օȎyo==^t/}?1O376t7Pe7Y|Lғ7}~t ?X 9hDnl=OK ; /W= */5tzٷ/7Lzv^+DL5>@,n.F=@fM0$ ]z_0 %{oU_P7Oź(ܙy⻃_'qQ[5dI-ԙGzPʺ &\L"y)Mt/%FT/aF֚owZXs}%E@UuLgk[We/]ϞMoʈ-`;Mp~/1Rmb$u->eŷK32A~K۳(]ߟǽ[?(c`ȑdMϒb #Lp}@ݯnK̾}?[z!$dZh`I)wo ?'Ee'ʹy=7.<3!_:zLf?dl4z6߾0~~g@LqަJʷ'[=_C; S _= \! q0f31HȞIk~vBAp=.DCF=sSj:*ؤuVCA-Y˼\JJnQ޲ 䭵^ע`>:%Oy0@2ՃE!tW;NRk"( !<=SnMglpL (F[FC f<3LI 9cqk4bf>|`wu5"B}\J"8T@L$VXb$(1GؚGȐ)"d,#Y%\S*FB^d 0ﮜi6R^id(uҭhm)o_)(Z/ȋϸJ \ սwg@{7gFq"%8"x uI7S#T.~ܺyO"hhQL&}+":@t f+),;RJ 5%q|/Lj}>ݡs.00OЯJTRze2ZЄ0NF}zlkM{h?e;awƅU+AK,q7-'tY89R}Ai@VSbq+6!W8i`ĥ@9#,B?P`A:yK@x 9Id{KrZiJKcoWiOˊ;WAI)`D<5’B 0Ahs rQ:'WE*,__{G]gvo/?A j];=vLt~H=/_^~yW~W{6u1fJQ.!"8 Xb K)gadT(ֿ7q[} ZSn+b3׍1IlMj^T;Rz8)VTP@i!&!d$& @4P6 0.9&a #Y)NY"5d[g^mKSU T57 c#Eq 32 D ml  F.0I{( OEGcTV3Z)aFIOeG6pdcQH d3#ـ*aiB!&:ӘifTcE$x݌1r r`aTj~5%a. nW>ܝj.o WDE7/~UTdeߗT3M~"yB5C0F+$;w4]W7,LTBPgSߞg; H_Tm#1uGC@rdq;lSvws+<}-,ߩP]YN j!?yԧ=u&8:٭*>]jef^ SYc[u3٭w*}JܖU7A%ftmhE076U*9C~.s" JWl'=w&\w[ztGgolcNQ.'ӳyӼi,ڍ.2OQ!r0 EɳPEBn({i_ochZp }X,͒g\m$Y!"Ս<|^Z߿$I&== E¸8HJ")okP$F$3Cd SI(JR(0! 
{@p|W ldI>@{^aٝ1*ƮLդBպt' : ;Y[ܙ2la= H.4Nl*S9@3 sUs1Ͳyz-Ez^z\&5`0O0X#1Q-wO++ /,|H"Ie*0' G|gOP/@yttE{[%`0Y}jd{2;cu?CKrƬMAk3ELڂX-$m9.Vǒ'^#$BJ2a_ʓD)|p#qIqz36>5\c~Br%p(#leH$E#H%*hOuCܴ^h&ŖFE"FQC׆Xl xPZQD4K} 1({Jht#YQ+2VH đ$# DP*$L4-1ػ6r$Wm/U| s8`3엛Afccl#yY`-nI-~Q+ qUX"@F4%F"#)W[jll<𝳱 8Ay^u3<0fծf62 IMѩq \ kLS9i$żԎ(,* jhY(d^آWK(2bN0^w9QlAKN8UXS*Qt+$Q>'\x([RFq|HcA`/Aoan!&yc-=hUK!77W?5z91b<מ\oTI܂`xE*zo>Mh4|'m7W(cՂ:nhnGOۃ/M7QzUZ$OIM%Ҏҋz?[-~֛*ftAr,˹KU"h,KR$ɎEYs囗fOwEWTUl`E"F}ǦWǑsITƐ=MyQw0JvC "?VtP#I%?"Ò8ɒy1Ru+k+&Ý{MT[n Qpx͎f,h비xjy_-~hkf+w騖B^^^.ݔheDAk0PCP+~<Б/u殂rT FXSP*>dy޹d od#:D 풕:Gr5X[w[6| {$+vC&ZC^ez,M Zy7]h)҆P}Ԣ̒uBͦ:!3lu6O>xL6@|mXwnlu+wt\Muc:Mw;]0֛wk^{ڰDG[MDT1;xe=S{nyDz6,;7cFdngi@BlQZ`(U c2wUlTZ-?8~}'bKzUt xv}I4ߟ޹H aC`6?7/v=I%v=ZC%vaQ[p k޸6lxhFh,qżaG6<1Jj-iDn943XiT0K姢 V)Jf-=DyC(&JK-}ZZJȥZzy.55Jf|?b4x,c*шQ&CRΒV#T|qj]?w)93<)8IH ڡЛZJ֒h|Z>h]MtZ& Ǭ.4Lsux+c튄ⱸ^B2.Jg{!+<pyR;:GfUq~*ۯ-|;5yP'=.g;6`{pX3׫sOS"T;jΒv_]UW<׊4yبHZn" )L\؜TLf)!Oz MMOj2|*vdHceR>kT25FJI}.u|TsR㬔`ÝSZ>7֬<^I" ^0tכR+R*PXY) _JVR_Ka+=k+x3M 8+-JӋrgaRNt!˦y[)DPDyJ+ZsN.VzV4JOCuA;9z-YKzSX?f&Qܗv^{@T*}Of#F<(5uRy= }\Om˙b)iB] ubI lG.Օ:|a>MC$8S'c|(v|7bd|QU11,f'&˔A]= x2vs.0R:1;w<P):fgյ0A R!&,r lL4j NUTRcVGU.OI}.*- 3%DNȩ.Q+rP 4!nϝWG`}SmHx:"2,R !0a5)!)ܦ)ZLH3 Ȅ-5F"kƨoCB(gLƑRbr&U`Reͨ KcTg8k5fVn"E%0AezKjQڣeY6FZ)/VpܽR֝ԡk!Bó[3P7h>4%.OъOlMf^ aR+dDnVq1"Usc!?eLX"5!PyCq.Asg0C<qm[5^JIz "|uڭz)s-5|s5z %|kAh{a@xMUcG"GD?FG %j;juHyaLqzdTݻf?"èxiͨ굉Z1k:jžpA /$%mƏDdӼcOpw^Qxʑc Q͢C^OZ.81).7n _`F=1o=T=iGs.j1Z᤭l5N8i's\ա`-8i :T% VTT|Yku8n 5-7\?J#Y(/5Cթ7Vkȋ#tMډiJ1b)RtVJiR-kJS fQc1M u,IC4lعJsP$'SCyFIi&d8g`hwNeԂ̽Go1K?m~DPGc3uu/Egur/'SUI$ۯ_Uu|TFUw]E;H`Ϟ4q70LץEi> T=Aq7x&nb/X05IQ7d]J&{᤿|dpLyw ZJIBCO~ۆ "y xnZ^4nhP[9Bva;U6nK;GڝNsho2L{_֭"NUrJYЙ(ܗԡq#!ڪ20*'"Ъ5|9'eR)3&e"nt[DrQ.xt G(OV/9p-@j>7C@mD*;[OVT׋Hp/EaS'~aVx?{ƀr.ֆw`[.1g&yYyVeސ]^dO̓䊥(ygsR_H:Rk/3j^l}<pR'$plTo xc__gۏE#ߓ b^b=5,@)E:'* )5[<$$4 p HB$o-@Rb<>y)P*Q;G%J=x&ӯeLBItx44 |9LPކk|| /b|O'IޒQ\\nI6 6|4UJ;2U/]{07GXy2/7Ϳg:aȇkRy~j6 (Rܧ 2)X4Ddw*pb;FbS 2!^&^W(*#/ |bqRc;ol1 { =I4=\l_sl\3%1rH8fh?[?f̟gOr Qbp˧^=oGI"|[m]1Q 
,8j+#_/ re!,+CrʜԦ2F3?PJ8T"%S&P30\ZP:ݬ a'L@ZT΁8Kv9a;b' r@lwyN1$M*zz6Tۃ?>3h({ZZ-?R闿zU(VϿfVbp^nV}8@&’W۫Q~;͗tw]agQQrXW;3ϲ۞j8zLHWR??=L㷑4]D*&&GaF`6j6$bdsCs@ :hn(*sOx,k< yŇţŎP8H(tXA#j]2AZA@Yӗ8 D%ڑ&U_syewB#`JHqeE?c}j^~0oN 6"ΘN&+"[\;t|dO yx?9ΰ\%̥ន%rHZY&iU̸RpAR Z,iT#B @1i2aH:B2;jbpSoͭ ^ā%|ǞJYď Qu}R,TְYX>gԥRK12^sw hT*Hc8J|).w ͗]$-G籇WlvKlu%رb׋zJ[bZ+íg"i4?"]b4VE|PA9 btVd0 >Xsa&q/S1t ucJUJLjkټ'QE! Ah=GBU(üI ɢ8rV7UߍJ}Yy쓠v_bw+qaJ/CUV[#_gFJs,՚Ppb=;BѴ(Zܡzj-ᓅ'qye5 ̑ĵp "i7U٠IR!(|u! u}Ç35 OP[٨k=_h Ԍ(ʨdG=f SFZtk̶ؙ8cg )($L j[(X)Nm (7gYA89ոowƘgXTRRm'SR UWϮ6-Fqo)n&xpYnZ@K__gH4V8Ƶ%)1KuHV2% | o/X0ߛ0#_sKCL ?m2K.Α66Achle774x7.ad3|oK">fSrD߆boK˴;39mQ8`cWn+#!oeqۙ7H{]XB]=w]c(: utZy\"f??B`aD尙\_4IW#0_:LL<[&wϑ\yIq ˟k$[$W$W(ނQ{T|w*Kŷ$Fo'w&'Lk\;e&>Vz$ֆeJ I1mp,(/ UJoB`HR#ZJEmQhATF)(MUOV1<'r1wU`Y-LJ 4D{pN$[WGEnM@ɏ:ݼ3vaͱf@Jjcwkj(š{:Ϋxj+ͫ~˼*2ę)f9ipi,Ϋ\f16y6U]W(CDXO+2.[ײQwǢjA[ _ooΨMWYr;iO̭>Ϡ՜}hmPx-~ݝI% %򰌋;aŤgelkߍ.3|wգgW`?.V+yZL[}^~zp.K}JE:f⪘}*ҨWޑv>x'#H#i-bn~n} :[ժpn#¼#g@CBs Pme5׿?1-Lq1L(xq i (*f2,+/Nnʵv߿plW3Ei~XA$@/^6,XQ/lW~0?TKiKc1 &HI `U f1#"IGg$a~k~f3 rE1`kEw%\~At5⣛j9!ll#2ҋi} >?v/6/YHBl, om;$ے@0$~#2q:ZÜ*Zaa,b=`hc V08ZdeյqT[WuRS!?okNS[EOP~~;BQJGT%m<Җףs .?aӃ`VmQ1G-ŝ]L&˹2]NʰfV5}݃.PwuVA9pa7?/mV e aҸ$}_n,cNo6 ;V\OVh@xѼ\&<Ǖ~̜@QSrmԚ_ٺhHp)\6w.zLJidڽR}#AO[C{qYH?92hP1}Jw] XzI]V `)wy@rŘ=rC3$(zrbK)l{e1S ԕѠ;\_\vwWrF4O6T z.rdC]W&c_+Ya_^9 a{݃.PB,Ԉj#viV05ZҭJ՛a50'6: q8F Fʁgh=ׂ&5TJ$O %2HLH)KnoưfkÙ+Yu =4G+m"~yd7R` Hk)@#$un'-%^PUpYc7I ߐ&ymc~#'ɓIf4(-U` \R+-d |nz$I\^K e:(Wz6IONӠ`([]%eZlłbt\%d*J]J؟oCM@MtVy3_Dvt俺ooBB9WTmjl;jnXu!zdP9yx^e(1X- l2˯x89}` :HP~*YQ-Ci=OC]+#N-PsIRI {Hf (qf[jBQE jSM\ǴM\.ܣƟ 5Imnx'9Aa\' c|q"+e[}kF}ۨ58TKtEQ ՜s.(H n᧨.X2A1;D2#ajQ"H!zbDcbH:b4~|?F2/yLY`ILQª$w$%6#bvP7~`Xqx'@lJޚ<14M  j0cVb7^O5ڈ~#˘i^|yגXݠ{7e_wܘWv┣T%J*M%TJqRFX%C{L\8Z%{ed71^p8E7>хO{roԩ Sl >?=}ȝVS8ۢ5$ZX+抡~`~Lt2=ziDh-,Vk%WJ% =^3Rk ZijĐͨj+.Џ")>#A;||z™NAwdtxTQ\zJOv ׃{;JA0ܴBe-NJyTGTW%S1ygu7(R Mِuh#6De h[4y(L1%-mQE$qE92Adn!6HF]X$&A/J).XGDGD: fSڝTb움:)|:n:튲Hudu{ 5\U~@ⶲxH]%7.JhAH ]WĔdDLS@k,?YW9?HB.p)RSNx$E][]id6gtmgbFjR 05q u-%3^/,,;"h9Ԑ8@O$ hOP\2/H=W{Sч q, 
/\Gю"^<ĊʫqŻRH*48}R74wZ#Q\HK9BD{mr-_SWR֍K49 /~R!hPE4_hwui$rRjߛsqԹ|OM!^#a &2Ry783ynQZBa!W큏?7`Ё>U ?) \͝!A ?IQ{46يMzOgj4muq_BJJx-FIü"(i =W*1ޯws9b;+Y4oƉh͓X6~#v`ʆkV=[ 8)u5|iUfEi JjdZ+'oQ&H `B+Of.)JW?ƜX \ 1Z%l&9w60Bb9@{ќU):_F{IJaѪwh-BDpHeLev0F: y~΁!d:0U8h֔Z:We;o.~}y.`/5 cm^ t6Y ]$˸=ϕ4.\@{Ba2Xn/Z@,8^fͰDx6ç P diL#o8f !zձ$R1|bϓ]BMW^ &E3]>Dsz'C*ϟ>h7Q@q'@˅=4YfF ,9P{[h844m5[ ¯,O1:\xN:e*Ϧ9uujDҐ+&imt0eg0A2mޕ./=<(6vPPQLe LR^d[?r%al?}%H323NahLD3P|)8~al'67"E*vY :Ӭ*٘Hz L bc[T$poUҮ`$FZ![=tff0]I9KOo {E&=O #$P)fE͸-ϕ_P<V$B@}VĈE6cd_r>W{#b6O4"րN w}c\:0\\VqZ1.,@B8A[0*К Z]u..xְK9't;tC mь} m=_/pW?Hop>\`$sf`޽XaI_X^bO%<) 6@єAbNj.*^1zul789֓8[մ֠#>Sow>V>&[V!ᄂ&|169^BKΝ/HP7ra`RxHY 6O~\b짲ȼWDF IdC ġcv$X]"|ԋh*ǒotÒ =%FRz:X2@%J Ap+UN3::HzLD.D#'tfXIhũ T9{.\PjZN v5DDS q-̓hkZ=p44,1Ōh:6]xV@M8vrv5lƨ J^)3Oh5K(NMҷtMG_q/<$(]\X5i|t6JGFmKU dd7ǜ ]O!u4?j=2ƽ@n_b<ZJ r+S#|_-  iBӶpٹȜR`HaXDzP dAڅ-ܨHl`FQ\>`Nn[}^MޖĔف1n;W 0H=ptWaq?;[ 6zMvHG چ_-^Ƽ>W$ӣHfΤfOޞ۵Ss'֛ +0)(̒DDA a0!yĺ Hî1D͊4/7߻hl:UK4%BEܜ )GP):qbfqR6xv,Cf'tH7"&gbHHn~K!!Apbs=vJ()Pnwu}V_E_% :};9PJVBc**YJnZ6b{~d|wy[e :0`.VQ(pܘš[/~ޖ#.%vt#Lƈ"Fw& $ghf 3 UtEfӶ~l]?ߜ'y}9 Qb|Ixm~S1o=~Q}_My~[~_/׿}W]gS2+ #I$O2&;[O˧2O_5/W|~.Hw7%E쿷OE~<~XdTܛLM( k4$\E;w;~Fi_? \,O=616ALۺs`![6)ą ujΎ[;g9ip"tQDtAfԺf3w9i,'$zx8ۑGrP Ġd!\0Y)3z>q01%v["vE {i-QhD̶0 z϶сukBFN6B( ugHUP1'SP!H5p57_Kq{ZqvdusȋoJ3IƭKС3$G$zհHsK҂lEn29y\11JeKDN@v蟻=,>~4qzυe"8pBզxYoUj3{܆7W86Ћ$PJTF )|+̀'̾?r-~/c DdElEO3Uzqd{|GHERx=86aUwG.=ziT)^@lޥ' V@P䡰É#?]K|Дܜ6|pg?`-8+g4{{^;@^Ť \OMIj}hَ g:S}:k,YFTEu!֓dhHjjjﹿHi."iqʃFj@|O  iBVV/h h(d[87*x Nȝ"N/aP+~ǫ1 GRfIcF `;g+7h'jED&L9J 0fc5MQD ALS1"6-m$R_*Wt:Ёsk}aS'kl&cHЕdOwm{'Qq'{onj &egU.rq<$%ϻ7؝T|nA Hb5&lpKn!k, OH~II%;)SJ P}A1~9UH6RG-Q|$d),ȣΌ'Zt?^YM-bgxM}k0vpfm2.F5pYSΟԥ| sXΌɼn}oQ58e=Gy^Qnc=tQ &;?֗L}{T.+6KoUKZntBWk44Tz8VZQJeі(D_ZH~{iV t{Ҋh(J諥EZM  ʛ!Xi 1h):D Ɲzf2n(]=_q5uI̊xNDM+YM1HќŜj 8E]qMHYoRC[wWŢ|o8~?Jp΀PF sN)yNBDw& &O( R$}:rY10ZU ߝbH0OrϢJR* pdDIyZ!! 
i2*mNC#~&R 25&j›J,GbD!]NNZA#@R,b#ǘRZuk)q6{:ݸ5JLA*L![r|3*quErوɆ6H@>b8946SE\2%J澌c ԅӲbB\Y&IB7Do Oh-r ( "*T.|?rY \* w:RI*1y ܷU/H0lֺ% ÐB?s@0E D Qh|XQ6Lk0*jƄ ֢Bz T1OR*AUYr<nVEU[kLVo7eW׺^.t?kQroMvGR4`/']B'K(Jxsb/Ehgq?sy}R1^> &-q/ˑt3vvz*F\XyN%>xeEot^e2-r^foN587_}s\(3 4w;C;]Z@'S\xPֱvKF8V&lxD#+(cDAndKAn-i^75,h͝ջ2N$PJmH@Q =wd+_i}K-=:WڒTn8=l?v0RVTnrEMs"k{{l 1|)W{ӥ vʇAmA(*wdo<&֕;\+mlXТ] n*]6&Pi6&QBP@+mh(al0A*B.$_*½ߢmiI% gqx立LRFKdkڽ!7{e[(~Y ns|R$7y_ 9897o?kJ#kvn?wp[Pҩts޲nҞYqD5W'˜#O/=>ћfAsy ܳ `8gng~֒K~?l}LHyyqQ'Mfq'"W2. |睟dz~(S|Plf2(W+ocKSpNvAw,f z8-?q!Ov Y?͞Pn,n_U}DX0:MKY5߃\Hϲl3*\$!\Dwoh7[S Nw. <`ڭyzTMֆ?~Fi[bt ([\4\@;#uCWL?Og_יYҤ> q2h>ᡭe3Cʨ]heޙ8\W=('X{1v 8UHJP*9#Uxo҄-8_ii)BX>YuaYWhpB$+z>R0t+V,#60O6v^f('="}H\BFzB*ՇEEț0W8SC I˴kʕ<됄,f#f}&'h%`I_DdFP]w<6"}R]Y<Ѽ T!6 ViR-k0@ Gw#)3{qknW@1#c [ڃk⼧4 Ct1ccQ[(-njUY#yJI1&r\4^DATNda nj1r_s*H>bBb[LT NZ4'J궘, H4VA25#붘ȜdP/[|K(4(ֺ-sb(?C&O9쏱ck2Q!Q DOKRq}$Dͥ*$mRy0pG9hU㧤>E"Lx 8(Z֡OŕΥѯ&GZO@ 4M` DOPM4{'dnxρX^nI_Bq@E'@1%Apc!$%1 $N?yAq^ pڸB 0oX' }6%|Q3NG@ pρ ͩV_QG5a1 i&)5'5M>->$iX<|(G6F*_ml^BD( a {DRМ1D/+unwS( Sk}+'e@h`i rOo"9-9ofp݂| n|v;N&Vu=Qqռ(4KЋ 3]hN \2fr QpWHj53הZ) vnﵣ(g+|{~g m,P,U(7-UNeuL9(Ac\ yLo[e:\"ewZ?.ܢS^Ox\;e뭯?{6ঢ়=CY`6 p &y kb[Z]d[M6%SrSMIT 0eX]] GCnNR3)_N\ww^WLcX e( ö[;6r|41Tqo)!7-m+ErltTkD‡Nh@/BhrN%HRM$|[Jy{:qj4ce2w7MFCnG{|nCC^F KZ7:1>V!&퀾&`Z صuoJhА:֝8X7'1|E>s[`-^C^T`pHH~x/ qs 6+v8^V\J42KYC˜tI YsMF:`y~[eA0i @+rtY!%)*%cG79}%;,FJGrKd;ĕM 9[)~sARk=XYM$w4$H( e@ eXA3-J6|MWvqaϧ5fJs/]J50d홇1A\ W awiu`hVL.Az8 , v& Awdn7Kh ϏC`'}`B0H¥ ȁo[vfFI"~^wXwĚtdxTM I:@1,74&3R ewSӄYa/Yc(2EZqBD]Ƴ]VqE:msڀYƩFȏ~prGc o!8"q $6-i&#]b҃^ K!#~d\bH5pt}"T> 4Y_'Sr`;c'T+.L'[ f)\B/B7f_)}h+f8GCgigOgWG_(y9癚az]Hy"泇`i(|Ӹ\dld2aOQƁy_w:^':@M}x6A uf*QXI2aa!*sfY Ȝ*a+ yR PWmg4 F=L,3tD*qsRp#'439B)WL!|Q-%8= YhDy2҉Cyk g$U8*G2rGR˜vp,YSdQ-@ҩ'4Z[&j8PRCS\K@*Ձok$fBnELb&jMgF\?29@,!xEXꚏjQѤmUakiMvLʰ8-1)O48sX4XoL!pm( _=sԢzܻ^_Nc0DzKGDԗhX" r ?rDF/vRfS[S0up[%q't՗T=i1k)@A7WkT7oD2Mr ^x%7ϫ"NR +^:Dqbz`gt:~)f+H88$O@#(1LR}hXlD!Ai?|B9AΘ.y[]O"xи2PbySյ74X@zI!N|f !Y9 fBQ~4Z$1`K02oV\f!.F`3N qњg(pQ? 
B=FaU Lm|V 0D FasW[H%d\3QSP*Cq\?v8p.[^ɞ ײOD&JS @]C5;c}9zZu3$S$gTTXc&3UqdTpj5\} Cu‹{f^ UÿKeKrgYe-Ƨt܌F "߄r> ld:[~^$}ǯ &r=|wT)<]})Z4_c}0f ~-d U4~O-oZJQA(G+8FajҕIjL*ժ;tYM8dcx$ۍW۔8e r(PD,'X#Y- goAaahDHJu:k@3_а>|LΧs:['cVV =Db$(M&{=|TqV<~yv YԔm|ˋr9\+Y͈WDoCk )@)k18>:A㺻~z7\3ה K6LgE ʜIܙr8BT4ke:+Q\: AXʆ?@Eޣ]k1:nftk!G@n)wS*S*fo*}ܚuuMmV6+syUJ^!X&u? A@4zQ֕ma M7 yyFwENMEϝ+咄tftw[ʿ!TgvR>%`.$D~U:֑ソ>^S#{94sr>"g^"f]1wAПok|)B8 v6z\wEa?U-DŽB@`*nDD SM#*κgLJNΩn BH|bvX(Ѐ81 #OZ`ζ^8~LA-nm,؀^eZgϩ/+]<+6eb. ; K&I|"Ot4I#uhJs,|C"^8yW[7,N_^Yy=Oa5x~vm!_xtGgPEgq(y0]o6>bo=A̺)یo5 \}Sh -r DZVSQ4R ez?6f8'LH\Ua]D3v &aSm; G%ö 8P-X跧Zg#TVּ-F̪23yק7(ȟ!=VvPDU<^8sA;x|t[Mӣw5 >,;duQie$X#W#jk9]kVcŅX]V ؿWR+bZۢC7y]|rЅi6:;1˅ ߣ*bl84hbgO(\i$#vV<*nIC^;O -V!&`ZصuhАQ:%I烴Mr+Չ}Fv0޵uhА[Ka-nEo`8jVH;)a{RC&E2Oyĺg՗T ) 勉x`Q"/:;q - S?cR_絔.d4o Zlqk)aZEiKi'0--֜ɓDz1-DUeIKQK R(f!0--Llqk)'aZI)鄖rՂTb XbpK3Kk,f~c>KƓ0M?2a$_ԷU!q̭iG Bl7y,@Rɏ:$K7DC@= 5TJTZd>Q8XAw+~%JC1O3D' J7v6pe*2d)>Ich{pvh@%x2cR甩-T?{}H߿O M̗$A=ƌ}NXWl"^z "њ Q+LPB|_)5}!7eVV4#Wx=) -QH q̦ilxٍ¬9:FBb&̥R?& p!Fv rIb&`1y]p*I.YRFXk90AX! 8f;@\v?Q>D4ƬJkFO |WB[J2mhO8I̜Z?'qδR=AF(;qQX?݆UB(T{ OS7zT^Za]E sUy|)EIv;٪:ASAӐ 2|DOOoyjjӀ;dlE:U/^ su?SmMd9Y)o/tEi󬴥4G+=g+gUI~VVV_J]sd);,OhS%o?꯫zeaz R;DxGmA~YpVrkW!zmP$&/2I3^=Mz@"E T){,t=h&2ղ1y+HP1/S#CVmeiB"t|˳eUʐCwཬFҲCB}0S ""k`5%Ieu0MR 2Jv)6`]%&8Ԣ"d'zv5,lkHok#GpoZ- v~ܨ.hhefIYQ3/M--g9ix|mdʆ3d՟cEO6 -zϞE_][ ug_͕=í'iVyrѻlrQ"LٕÖJ?j! 
Feb 19 09:42:39 crc kubenswrapper[4965]: I0219 09:42:39.491146 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60766->192.168.126.11:17697: read: connection reset by peer"
Feb 19 09:42:39 crc kubenswrapper[4965]: E0219 09:42:39.989075 4965 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.18959c8a09413f92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 09:42:25.11484917 +0000 UTC m=+0.736170490,LastTimestamp:2026-02-19 09:42:25.11484917 +0000 UTC m=+0.736170490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 09:42:40 crc kubenswrapper[4965]: I0219 09:42:40.123259 4965 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Feb 19 09:42:40 crc kubenswrapper[4965]: I0219 09:42:40.140634 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 06:52:57.953233314 +0000 UTC
Feb 19 09:42:40 crc kubenswrapper[4965]: I0219 09:42:40.305801 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 19 09:42:40 crc kubenswrapper[4965]: I0219 09:42:40.307761 4965 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c" exitCode=255
Feb 19 09:42:40 crc kubenswrapper[4965]: I0219 09:42:40.307814 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c"}
Feb 19 09:42:40 crc kubenswrapper[4965]: I0219 09:42:40.308019 4965 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:42:40 crc kubenswrapper[4965]: I0219 09:42:40.308935 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:42:40 crc kubenswrapper[4965]: I0219 09:42:40.309004 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:42:40 crc kubenswrapper[4965]: I0219 09:42:40.309018 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:42:40 crc kubenswrapper[4965]: I0219 09:42:40.309570 4965 scope.go:117] "RemoveContainer" containerID="9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c"
Feb 19 09:42:40 crc kubenswrapper[4965]: I0219 09:42:40.456575 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 19 09:42:40 crc kubenswrapper[4965]: I0219 09:42:40.456836 4965 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:42:40 crc kubenswrapper[4965]: I0219 09:42:40.458110 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:42:40 crc kubenswrapper[4965]: I0219 09:42:40.458150 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:42:40 crc kubenswrapper[4965]: I0219 09:42:40.458166 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:42:40 crc kubenswrapper[4965]: I0219 09:42:40.509790 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 19 09:42:40 crc kubenswrapper[4965]: I0219 09:42:40.609602 4965 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 09:42:40 crc kubenswrapper[4965]: I0219 09:42:40.609696 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 19 09:42:40 crc kubenswrapper[4965]: I0219 09:42:40.797401 4965 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 19 09:42:40 crc kubenswrapper[4965]: I0219 09:42:40.797472 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 19 09:42:41 crc kubenswrapper[4965]: I0219 09:42:41.141700 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 14:04:03.043734648 +0000 UTC
Feb 19 09:42:41 crc kubenswrapper[4965]: I0219 09:42:41.312457 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 19 09:42:41 crc kubenswrapper[4965]: I0219 09:42:41.313976 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e"}
Feb 19 09:42:41 crc kubenswrapper[4965]: I0219 09:42:41.314103 4965 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:42:41 crc kubenswrapper[4965]: I0219 09:42:41.314128 4965 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:42:41 crc kubenswrapper[4965]: I0219 09:42:41.314992 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:42:41 crc kubenswrapper[4965]: I0219 09:42:41.315025 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:42:41 crc kubenswrapper[4965]: I0219 09:42:41.315041 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:42:41 crc kubenswrapper[4965]: I0219 09:42:41.315825 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:42:41 crc kubenswrapper[4965]: I0219 09:42:41.315947 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:42:41 crc kubenswrapper[4965]: I0219 09:42:41.316043 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:42:41 crc kubenswrapper[4965]: I0219 09:42:41.326181 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 19 09:42:41 crc kubenswrapper[4965]: I0219 09:42:41.846412 4965 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 09:42:41 crc kubenswrapper[4965]: I0219 09:42:41.846809 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 19 09:42:42 crc kubenswrapper[4965]: I0219 09:42:42.142714 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 17:05:38.908827119 +0000 UTC
Feb 19 09:42:42 crc kubenswrapper[4965]: I0219 09:42:42.317165 4965 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:42:42 crc kubenswrapper[4965]: I0219 09:42:42.318043 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:42:42 crc kubenswrapper[4965]: I0219 09:42:42.318184 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:42:42 crc kubenswrapper[4965]: I0219 09:42:42.318296 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:42:43 crc kubenswrapper[4965]: I0219 09:42:43.142867 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 09:04:33.9585555 +0000 UTC
Feb 19 09:42:44 crc kubenswrapper[4965]: I0219 09:42:44.016827 4965 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 19 09:42:44 crc kubenswrapper[4965]: I0219 09:42:44.144025 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 04:03:50.987398985 +0000 UTC
Feb 19 09:42:44 crc kubenswrapper[4965]: I0219 09:42:44.453541 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 09:42:44 crc kubenswrapper[4965]: I0219 09:42:44.453822 4965 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:42:44 crc kubenswrapper[4965]: I0219 09:42:44.455178 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:42:44 crc kubenswrapper[4965]: I0219 09:42:44.455229 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:42:44 crc kubenswrapper[4965]: I0219 09:42:44.455245 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:42:45 crc kubenswrapper[4965]: I0219 09:42:45.145028 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 08:21:33.150678669 +0000 UTC
Feb 19 09:42:45 crc kubenswrapper[4965]: E0219 09:42:45.278929 4965 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 19 09:42:45 crc kubenswrapper[4965]: I0219 09:42:45.614456 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 09:42:45 crc kubenswrapper[4965]: I0219 09:42:45.614652 4965 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 09:42:45 crc kubenswrapper[4965]: I0219 09:42:45.614833 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 09:42:45 crc kubenswrapper[4965]: I0219 09:42:45.615720 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:42:45 crc kubenswrapper[4965]: I0219 09:42:45.615752 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:42:45 crc kubenswrapper[4965]: I0219 09:42:45.615761 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:42:45 crc kubenswrapper[4965]: I0219 09:42:45.618892 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 09:42:45 crc kubenswrapper[4965]: E0219 09:42:45.799353 4965 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Feb 19 09:42:45 crc kubenswrapper[4965]: I0219 09:42:45.800549 4965 trace.go:236] Trace[447757942]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 09:42:35.622) (total time: 10178ms):
Feb 19 09:42:45 crc kubenswrapper[4965]: Trace[447757942]: ---"Objects listed" error: 10178ms (09:42:45.800)
Feb 19 09:42:45 crc kubenswrapper[4965]: Trace[447757942]: [10.178172794s] [10.178172794s] END
Feb 19 09:42:45 crc kubenswrapper[4965]: I0219 09:42:45.800589 4965 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 19 09:42:45 crc kubenswrapper[4965]: I0219 09:42:45.800883 4965 trace.go:236] Trace[1664090146]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 09:42:32.530) (total time: 13270ms):
Feb 19 09:42:45 crc kubenswrapper[4965]: Trace[1664090146]: ---"Objects listed" error: 13270ms (09:42:45.800)
Feb 19 09:42:45 crc kubenswrapper[4965]: Trace[1664090146]: [13.270728304s] [13.270728304s] END
Feb 19 09:42:45 crc kubenswrapper[4965]: I0219 09:42:45.800909 4965 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 19 09:42:45 crc kubenswrapper[4965]: I0219 09:42:45.801939 4965 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 19 09:42:45 crc kubenswrapper[4965]: I0219 09:42:45.802140 4965 trace.go:236] Trace[351830527]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 09:42:33.142) (total time: 12659ms):
Feb 19 09:42:45 crc kubenswrapper[4965]: Trace[351830527]: ---"Objects listed" error: 12659ms (09:42:45.802)
Feb 19 09:42:45 crc kubenswrapper[4965]: Trace[351830527]: [12.659433242s] [12.659433242s] END
Feb 19 09:42:45 crc kubenswrapper[4965]: I0219 09:42:45.802170 4965 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 19 09:42:45 crc kubenswrapper[4965]: E0219 09:42:45.802588 4965 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Feb 19 09:42:45 crc kubenswrapper[4965]: I0219 09:42:45.806013 4965 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 19 09:42:45 crc kubenswrapper[4965]: I0219 09:42:45.824614 4965 csr.go:261] certificate signing request csr-jb5sh is approved, waiting to be issued
Feb 19 09:42:45 crc kubenswrapper[4965]: I0219 09:42:45.833395 4965 csr.go:257] certificate signing request csr-jb5sh is issued
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.105944 4965 apiserver.go:52] "Watching apiserver"
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.110396 4965 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.110676 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.111048 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.111113 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.111642 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.111684 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.111606 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.111848 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.111953 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.113401 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.113475 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.116622 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.116633 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.116666 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.116675 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.116705 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.116732 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.116804 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.116804 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.117674 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.133303 4965 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.142246 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.145610 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 12:52:53.559168351 +0000 UTC
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.154498 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.164839 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.175141 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.187317 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.198876 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.204662 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.204698 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.204722 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.204741 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.204760 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.204777 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.204793 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 
09:42:46.204809 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.204825 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.204842 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.204857 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.204872 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.204888 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.204904 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.204919 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.204916 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.204933 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.204949 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.204993 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205012 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205064 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205088 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" 
(UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205107 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205121 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205220 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205239 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205254 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205282 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205308 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205327 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205361 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205383 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 
09:42:46.205399 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205416 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205433 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205448 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205469 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205487 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205510 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205548 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205566 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205598 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205616 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205634 
4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205651 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205669 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205690 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205708 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205732 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205749 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205767 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205817 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205834 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205849 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205867 4965 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205884 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205900 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205916 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205932 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205948 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" 
(UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205965 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205981 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205999 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206015 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206031 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206047 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206064 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206109 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206128 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206144 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206162 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 09:42:46 crc 
kubenswrapper[4965]: I0219 09:42:46.206218 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206247 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206303 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206319 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206338 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206355 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206372 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206391 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206409 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206426 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206442 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 
09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206460 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206476 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206492 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206509 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206528 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206543 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206561 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206578 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206595 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206612 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206629 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 
09:42:46.206648 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206666 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206683 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206701 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206717 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206733 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206750 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206767 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206783 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206800 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206832 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 09:42:46 crc 
kubenswrapper[4965]: I0219 09:42:46.206848 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206865 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206884 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206902 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206923 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206939 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206958 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205281 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206981 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205961 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.205961 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206032 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.207019 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:42:46.706994089 +0000 UTC m=+22.328315619 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206043 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.207048 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206176 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206215 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206165 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.207105 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206262 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206346 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206558 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206557 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206569 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206714 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206779 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206791 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206964 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206983 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.207239 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.207267 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.207295 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.207455 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.207812 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.207838 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.208122 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.208406 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.208770 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.208771 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.208886 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.208905 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.208951 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.208962 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.209147 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.209407 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.209412 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.209475 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.209549 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.209560 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.209685 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.209703 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.209858 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.209969 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.210027 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.210048 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.210262 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.210298 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.210377 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.210389 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.210533 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.210654 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.210841 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.210969 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.211165 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.211323 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.211634 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.214771 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.214889 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.215130 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.215016 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.215347 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.215567 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.215800 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.215914 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.216006 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.216272 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.216510 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.217389 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.217551 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.217829 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.220606 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.220650 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.220690 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.220760 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.220874 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.221020 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.221084 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.206975 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.223950 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.223978 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.223998 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224018 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224041 4965 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224059 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224079 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224096 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224112 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224129 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224146 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224163 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224178 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224210 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224226 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224244 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224263 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224286 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224309 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224331 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224357 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224381 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224401 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224426 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224449 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224471 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224495 4965 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224493 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224518 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224725 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224748 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224766 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224785 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224807 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224827 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224844 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224861 4965 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224877 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224895 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224910 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224935 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224953 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224972 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.224988 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225004 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225021 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225039 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225095 4965 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225112 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225129 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225147 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225163 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225180 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225215 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225237 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225253 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225268 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225285 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225303 4965 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225331 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225347 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225364 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225383 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225403 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225421 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225437 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225454 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225470 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225486 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 
09:42:46.225501 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225517 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225533 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225551 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225567 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225581 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225597 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225614 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225630 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225644 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225660 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225676 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225692 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225709 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225725 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225740 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225755 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225798 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225821 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225839 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225861 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225882 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225900 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225919 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225934 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225953 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225973 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.225991 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226008 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226024 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226041 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 
09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226111 4965 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226123 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226134 4965 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226143 4965 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226152 4965 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226162 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226172 4965 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226182 4965 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226208 4965 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226222 4965 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226232 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226241 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226251 4965 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226260 4965 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226269 4965 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226278 4965 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226287 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226298 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226307 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226317 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226326 4965 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226335 4965 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226344 4965 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226352 4965 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226361 4965 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226369 4965 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226378 4965 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226388 4965 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226397 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 
19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226407 4965 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226415 4965 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226425 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226434 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226443 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226452 4965 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226462 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226470 4965 reconciler_common.go:293] "Volume detached for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226479 4965 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226488 4965 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226497 4965 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226507 4965 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226516 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226525 4965 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226533 4965 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226543 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226552 4965 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226561 4965 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226572 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226581 4965 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226590 4965 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226598 4965 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226607 4965 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226616 4965 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226625 4965 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226633 4965 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226642 4965 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226651 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226661 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath 
\"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226669 4965 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226678 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226686 4965 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226695 4965 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226706 4965 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226714 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226723 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226732 4965 reconciler_common.go:293] "Volume 
detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226741 4965 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226751 4965 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226761 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226770 4965 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226780 4965 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226793 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226802 4965 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226811 4965 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226819 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226828 4965 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226836 4965 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226846 4965 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226854 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.226864 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc 
kubenswrapper[4965]: I0219 09:42:46.226874 4965 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.229588 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.229767 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.229788 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.229801 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.229942 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.230359 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.230384 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.230449 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.230569 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.230631 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.230674 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.230835 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.231407 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.231750 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.231789 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.232030 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.232236 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.232918 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.234568 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.234773 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.234788 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.234840 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.234855 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.237788 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.237854 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.238124 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.238169 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.238234 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.238171 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.238370 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.238845 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.238938 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.239228 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.239657 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.239683 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.239932 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.240360 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.241387 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.242802 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.242843 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.243444 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.243722 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.243746 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.243662 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.244077 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.244131 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.244251 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.244521 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.244878 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.244887 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.244956 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.245258 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.245885 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.245900 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.245929 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.246046 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.246151 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.246348 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.246360 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.246479 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.246554 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.246670 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.246888 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.246991 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.247144 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.248772 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.249537 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.252162 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.252465 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.252806 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.252985 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.257558 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.257714 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.258061 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.258349 4965 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.258497 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.258501 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.258907 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.259003 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.259017 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.259262 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.259335 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.259490 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.259613 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.259880 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.260034 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.260731 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.261208 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.261295 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.261315 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.261438 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.261765 4965 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.261866 4965 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.261953 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:42:46.761929869 +0000 UTC m=+22.383251179 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.262044 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:42:46.76201232 +0000 UTC m=+22.383333630 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.262128 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.264347 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.264445 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.264940 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.268096 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.269008 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.269545 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.270369 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.275143 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.276536 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.276862 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.277111 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.277140 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.279262 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.281421 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.283151 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.283410 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.283575 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.284031 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.284056 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.285248 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.289518 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.290748 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.290842 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.294294 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.294315 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.294543 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.295657 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.295953 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.301835 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.307641 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.307667 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.307684 4965 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.307757 4965 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 09:42:46.807732979 +0000 UTC m=+22.429054489 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.311431 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.314301 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.314323 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.314336 4965 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 
09:42:46.314385 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 09:42:46.814369707 +0000 UTC m=+22.435691017 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.316647 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327287 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327339 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327459 4965 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327476 4965 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327488 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327501 4965 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 
09:42:46.327515 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327527 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327538 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327549 4965 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327563 4965 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327575 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327589 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327601 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327613 4965 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327628 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327640 4965 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327653 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327665 4965 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327677 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327687 4965 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327701 4965 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327713 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327725 4965 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327736 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327748 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327759 4965 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327497 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327770 4965 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327826 4965 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327842 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327857 4965 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327871 4965 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327886 4965 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327901 4965 reconciler_common.go:293] "Volume 
detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327915 4965 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327930 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327944 4965 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327957 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327970 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327983 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327996 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328009 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328023 4965 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328035 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328049 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328064 4965 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328079 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328094 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328108 4965 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328121 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328132 4965 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328144 4965 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328156 4965 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328167 4965 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328179 4965 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.327579 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328445 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328465 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328478 4965 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328490 4965 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328503 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328516 4965 reconciler_common.go:293] "Volume detached for 
volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328528 4965 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328540 4965 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328552 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328564 4965 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328576 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328587 4965 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328598 4965 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328609 4965 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328621 4965 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328633 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328647 4965 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328657 4965 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328668 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328681 4965 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 19 
09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328695 4965 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328707 4965 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328725 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328738 4965 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328750 4965 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328762 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328777 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc 
kubenswrapper[4965]: I0219 09:42:46.328788 4965 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328801 4965 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328812 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328825 4965 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328836 4965 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328848 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328862 4965 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328875 4965 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328887 4965 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328900 4965 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328912 4965 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328924 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328936 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328948 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328960 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath 
\"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328972 4965 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328984 4965 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.328996 4965 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.329007 4965 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.329020 4965 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.329031 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.329044 4965 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.329058 4965 reconciler_common.go:293] "Volume detached 
for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.329071 4965 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.329084 4965 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.329098 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.329146 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.329159 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.329170 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.329182 4965 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.329212 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.329225 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.329237 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.365389 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.427011 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.436615 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.446994 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.732752 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.732965 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:42:47.732938498 +0000 UTC m=+23.354259808 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.833908 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.833956 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.833980 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.833999 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.834124 4965 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.834183 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:42:47.83416811 +0000 UTC m=+23.455489420 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.834292 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.834306 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.834317 4965 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.834342 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 09:42:47.834336104 +0000 UTC m=+23.455657414 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.834389 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.834399 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.834407 4965 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.834429 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 09:42:47.834423097 +0000 UTC m=+23.455744487 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.834463 4965 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:42:46 crc kubenswrapper[4965]: E0219 09:42:46.834484 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:42:47.834478288 +0000 UTC m=+23.455799598 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.834690 4965 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-19 09:37:45 +0000 UTC, rotation deadline is 2026-12-01 06:27:06.664749269 +0000 UTC Feb 19 09:42:46 crc kubenswrapper[4965]: I0219 09:42:46.834736 4965 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6836h44m19.830015795s for next certificate rotation Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.145968 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 09:10:54.93001924 +0000 UTC Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.204973 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.205640 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.206546 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.207150 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" 
path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.207714 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.208166 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.208789 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.209331 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.209969 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.210495 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.210957 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.211586 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.212078 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.212647 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.213152 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.213717 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.214279 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.214653 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.218103 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.219216 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.219813 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.221030 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.221577 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.222853 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.223368 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.224640 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.225437 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.226037 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.227191 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.227788 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.228849 4965 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.228982 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.231009 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.232116 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.232619 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.234531 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.235315 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.236381 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.237127 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.238736 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.239299 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.240454 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.241682 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.242418 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.243574 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.244275 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.245371 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.246276 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.246830 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.247815 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.248405 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.249509 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.250278 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.250846 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.336303 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184"} Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.336386 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1"} Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.336405 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"eb5dd02a59be1b9333902abfc480100838998a389d60d76bdc925a80e62f33c4"} Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.337236 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a4b74ef4c759f251255f8e61ed78a0d4156a52f59eb2370a731f51e9cd6d7de3"} Feb 19 09:42:47 crc kubenswrapper[4965]: 
I0219 09:42:47.339063 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac"} Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.339118 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c40f4b9b50723e85e319b1278f6b26c68528638d2dd51dd22aff3dc7ff9178db"} Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.360760 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.385242 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.396403 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.406994 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.420998 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.447491 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.472148 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.507340 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.557709 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.639319 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.681959 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.741435 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:42:47 crc kubenswrapper[4965]: E0219 09:42:47.741662 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:42:49.741627038 +0000 UTC m=+25.362948348 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.750903 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes
\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.782530 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-6nv8r"] Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.783119 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-6nv8r" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.786694 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.787414 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.787460 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.828567 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.843295 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e7972115-bfc1-42ee-b756-e394806eed51-hosts-file\") pod \"node-resolver-6nv8r\" (UID: \"e7972115-bfc1-42ee-b756-e394806eed51\") " pod="openshift-dns/node-resolver-6nv8r" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.843606 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vd96\" (UniqueName: \"kubernetes.io/projected/e7972115-bfc1-42ee-b756-e394806eed51-kube-api-access-5vd96\") pod \"node-resolver-6nv8r\" (UID: \"e7972115-bfc1-42ee-b756-e394806eed51\") " pod="openshift-dns/node-resolver-6nv8r" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.843778 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.844097 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.844264 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.844371 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:42:47 crc kubenswrapper[4965]: E0219 09:42:47.844041 4965 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:42:47 crc kubenswrapper[4965]: E0219 09:42:47.844556 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:42:47 crc kubenswrapper[4965]: E0219 09:42:47.844602 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:42:47 crc kubenswrapper[4965]: E0219 09:42:47.844618 4965 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:42:47 crc kubenswrapper[4965]: E0219 09:42:47.844682 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 09:42:49.844662192 +0000 UTC m=+25.465983502 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:42:47 crc kubenswrapper[4965]: E0219 09:42:47.844379 4965 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:42:47 crc kubenswrapper[4965]: E0219 09:42:47.844736 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-19 09:42:49.844727254 +0000 UTC m=+25.466048794 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:42:47 crc kubenswrapper[4965]: E0219 09:42:47.844490 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:42:47 crc kubenswrapper[4965]: E0219 09:42:47.844758 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:42:47 crc kubenswrapper[4965]: E0219 09:42:47.844768 4965 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:42:47 crc kubenswrapper[4965]: E0219 09:42:47.844793 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 09:42:49.844786135 +0000 UTC m=+25.466107675 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:42:47 crc kubenswrapper[4965]: E0219 09:42:47.845071 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:42:49.845046771 +0000 UTC m=+25.466368121 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.860718 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.880588 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.894720 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.913086 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.932107 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.945530 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e7972115-bfc1-42ee-b756-e394806eed51-hosts-file\") pod \"node-resolver-6nv8r\" (UID: \"e7972115-bfc1-42ee-b756-e394806eed51\") " 
pod="openshift-dns/node-resolver-6nv8r" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.945579 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vd96\" (UniqueName: \"kubernetes.io/projected/e7972115-bfc1-42ee-b756-e394806eed51-kube-api-access-5vd96\") pod \"node-resolver-6nv8r\" (UID: \"e7972115-bfc1-42ee-b756-e394806eed51\") " pod="openshift-dns/node-resolver-6nv8r" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.945652 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e7972115-bfc1-42ee-b756-e394806eed51-hosts-file\") pod \"node-resolver-6nv8r\" (UID: \"e7972115-bfc1-42ee-b756-e394806eed51\") " pod="openshift-dns/node-resolver-6nv8r" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.951720 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.967584 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.975798 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vd96\" (UniqueName: \"kubernetes.io/projected/e7972115-bfc1-42ee-b756-e394806eed51-kube-api-access-5vd96\") pod \"node-resolver-6nv8r\" (UID: \"e7972115-bfc1-42ee-b756-e394806eed51\") " pod="openshift-dns/node-resolver-6nv8r" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.981964 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:47 crc kubenswrapper[4965]: I0219 09:42:47.993918 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:47Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.110014 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-6nv8r" Feb 19 09:42:48 crc kubenswrapper[4965]: W0219 09:42:48.125538 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7972115_bfc1_42ee_b756_e394806eed51.slice/crio-f9602d0edf33ea66904ffca563c297a8c0201daef49bb65076a92953553203e6 WatchSource:0}: Error finding container f9602d0edf33ea66904ffca563c297a8c0201daef49bb65076a92953553203e6: Status 404 returned error can't find the container with id f9602d0edf33ea66904ffca563c297a8c0201daef49bb65076a92953553203e6 Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.146532 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 15:28:31.786930766 +0000 UTC Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.197917 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.198002 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:42:48 crc kubenswrapper[4965]: E0219 09:42:48.198098 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.198304 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:42:48 crc kubenswrapper[4965]: E0219 09:42:48.198298 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:42:48 crc kubenswrapper[4965]: E0219 09:42:48.198407 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.219924 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-vpj8c"] Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.220614 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.221355 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-nsjqz"] Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.221737 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.222284 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.222664 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.224454 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.224663 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.224697 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.224733 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.224797 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.241586 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-7mhh9"] Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.242776 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.248780 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.248857 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dcfpx"] Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.249146 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.249399 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.249569 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.249753 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 
09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.250037 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.250452 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.255935 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.256404 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.256742 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.257012 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.259700 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.259706 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.259739 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.281547 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.302339 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.340277 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.343456 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6nv8r" event={"ID":"e7972115-bfc1-42ee-b756-e394806eed51","Type":"ContainerStarted","Data":"f9602d0edf33ea66904ffca563c297a8c0201daef49bb65076a92953553203e6"} Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.348874 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-host-run-netns\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.348931 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-host-var-lib-cni-multus\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.348951 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/63ef3eb8-6103-492d-b6ef-f16081d15e83-proxy-tls\") pod \"machine-config-daemon-7mhh9\" (UID: \"63ef3eb8-6103-492d-b6ef-f16081d15e83\") " pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.348970 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-os-release\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.348988 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-host-run-k8s-cni-cncf-io\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349003 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-hostroot\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349018 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-etc-kubernetes\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349037 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-run-netns\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349053 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-multus-cni-dir\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349070 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-cni-bin\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349087 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/26ce37d0-9ace-438a-bdd4-6bb30e41bac8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vpj8c\" (UID: \"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\") " pod="openshift-multus/multus-additional-cni-plugins-vpj8c" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349102 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-run-ovn-kubernetes\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349116 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-multus-conf-dir\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349133 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-node-log\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349173 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/26ce37d0-9ace-438a-bdd4-6bb30e41bac8-cnibin\") pod \"multus-additional-cni-plugins-vpj8c\" (UID: \"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\") " pod="openshift-multus/multus-additional-cni-plugins-vpj8c" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349206 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g9zg\" (UniqueName: \"kubernetes.io/projected/63ef3eb8-6103-492d-b6ef-f16081d15e83-kube-api-access-9g9zg\") pod \"machine-config-daemon-7mhh9\" (UID: \"63ef3eb8-6103-492d-b6ef-f16081d15e83\") " pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349244 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-host-var-lib-cni-bin\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349267 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7c788dfa-1923-4a2b-9619-73acf92ec849-ovn-node-metrics-cert\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349282 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-systemd-units\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349296 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-var-lib-openvswitch\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349315 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d758w\" (UniqueName: \"kubernetes.io/projected/7c788dfa-1923-4a2b-9619-73acf92ec849-kube-api-access-d758w\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 
09:42:48.349332 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/26ce37d0-9ace-438a-bdd4-6bb30e41bac8-os-release\") pod \"multus-additional-cni-plugins-vpj8c\" (UID: \"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\") " pod="openshift-multus/multus-additional-cni-plugins-vpj8c" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349350 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7c788dfa-1923-4a2b-9619-73acf92ec849-ovnkube-script-lib\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349365 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-host-var-lib-kubelet\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349379 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4tp6\" (UniqueName: \"kubernetes.io/projected/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-kube-api-access-s4tp6\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349395 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349418 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7c788dfa-1923-4a2b-9619-73acf92ec849-ovnkube-config\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349434 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-cni-binary-copy\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349448 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-host-run-multus-certs\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349470 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/26ce37d0-9ace-438a-bdd4-6bb30e41bac8-cni-binary-copy\") pod \"multus-additional-cni-plugins-vpj8c\" (UID: \"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\") " pod="openshift-multus/multus-additional-cni-plugins-vpj8c" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349484 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m5tr\" (UniqueName: \"kubernetes.io/projected/26ce37d0-9ace-438a-bdd4-6bb30e41bac8-kube-api-access-8m5tr\") pod \"multus-additional-cni-plugins-vpj8c\" 
(UID: \"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\") " pod="openshift-multus/multus-additional-cni-plugins-vpj8c" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349513 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-run-ovn\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349528 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-log-socket\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349550 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-cnibin\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349569 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-cni-netd\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349584 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c788dfa-1923-4a2b-9619-73acf92ec849-env-overrides\") pod \"ovnkube-node-dcfpx\" (UID: 
\"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349600 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/63ef3eb8-6103-492d-b6ef-f16081d15e83-mcd-auth-proxy-config\") pod \"machine-config-daemon-7mhh9\" (UID: \"63ef3eb8-6103-492d-b6ef-f16081d15e83\") " pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349614 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-run-openvswitch\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349629 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-multus-socket-dir-parent\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349644 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-run-systemd\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349658 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/26ce37d0-9ace-438a-bdd4-6bb30e41bac8-system-cni-dir\") pod \"multus-additional-cni-plugins-vpj8c\" (UID: \"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\") " pod="openshift-multus/multus-additional-cni-plugins-vpj8c" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349672 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/63ef3eb8-6103-492d-b6ef-f16081d15e83-rootfs\") pod \"machine-config-daemon-7mhh9\" (UID: \"63ef3eb8-6103-492d-b6ef-f16081d15e83\") " pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349688 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-etc-openvswitch\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349709 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/26ce37d0-9ace-438a-bdd4-6bb30e41bac8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vpj8c\" (UID: \"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\") " pod="openshift-multus/multus-additional-cni-plugins-vpj8c" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349727 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-kubelet\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349741 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-slash\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349756 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-system-cni-dir\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.349772 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-multus-daemon-config\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.352481 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.371372 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.390128 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.408322 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.446875 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.450517 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/26ce37d0-9ace-438a-bdd4-6bb30e41bac8-cnibin\") pod \"multus-additional-cni-plugins-vpj8c\" (UID: \"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\") " pod="openshift-multus/multus-additional-cni-plugins-vpj8c" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.450585 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g9zg\" (UniqueName: \"kubernetes.io/projected/63ef3eb8-6103-492d-b6ef-f16081d15e83-kube-api-access-9g9zg\") pod \"machine-config-daemon-7mhh9\" (UID: \"63ef3eb8-6103-492d-b6ef-f16081d15e83\") " pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.450610 4965 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/26ce37d0-9ace-438a-bdd4-6bb30e41bac8-cnibin\") pod \"multus-additional-cni-plugins-vpj8c\" (UID: \"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\") " pod="openshift-multus/multus-additional-cni-plugins-vpj8c" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.450642 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7c788dfa-1923-4a2b-9619-73acf92ec849-ovn-node-metrics-cert\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.450664 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-host-var-lib-cni-bin\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.450745 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-host-var-lib-cni-bin\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.450861 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-systemd-units\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.450794 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-systemd-units\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.450925 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-var-lib-openvswitch\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.450951 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d758w\" (UniqueName: \"kubernetes.io/projected/7c788dfa-1923-4a2b-9619-73acf92ec849-kube-api-access-d758w\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.451002 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-var-lib-openvswitch\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.451367 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/26ce37d0-9ace-438a-bdd4-6bb30e41bac8-os-release\") pod \"multus-additional-cni-plugins-vpj8c\" (UID: \"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\") " pod="openshift-multus/multus-additional-cni-plugins-vpj8c" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.451411 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.451433 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7c788dfa-1923-4a2b-9619-73acf92ec849-ovnkube-config\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.451488 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.451506 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/26ce37d0-9ace-438a-bdd4-6bb30e41bac8-os-release\") pod \"multus-additional-cni-plugins-vpj8c\" (UID: \"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\") " pod="openshift-multus/multus-additional-cni-plugins-vpj8c" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.451529 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7c788dfa-1923-4a2b-9619-73acf92ec849-ovnkube-script-lib\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.451613 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-host-var-lib-kubelet\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.451649 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4tp6\" (UniqueName: \"kubernetes.io/projected/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-kube-api-access-s4tp6\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.451681 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/26ce37d0-9ace-438a-bdd4-6bb30e41bac8-cni-binary-copy\") pod \"multus-additional-cni-plugins-vpj8c\" (UID: \"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\") " pod="openshift-multus/multus-additional-cni-plugins-vpj8c" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.451705 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m5tr\" (UniqueName: \"kubernetes.io/projected/26ce37d0-9ace-438a-bdd4-6bb30e41bac8-kube-api-access-8m5tr\") pod \"multus-additional-cni-plugins-vpj8c\" (UID: \"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\") " pod="openshift-multus/multus-additional-cni-plugins-vpj8c" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.451730 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-cni-binary-copy\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.451752 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-host-run-multus-certs\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.451776 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-log-socket\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.451840 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-run-ovn\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.451867 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-cni-netd\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.451894 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c788dfa-1923-4a2b-9619-73acf92ec849-env-overrides\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.451936 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/63ef3eb8-6103-492d-b6ef-f16081d15e83-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-7mhh9\" (UID: \"63ef3eb8-6103-492d-b6ef-f16081d15e83\") " pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.451963 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-cnibin\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.451985 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-run-openvswitch\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452006 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-multus-socket-dir-parent\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452025 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7c788dfa-1923-4a2b-9619-73acf92ec849-ovnkube-config\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452073 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-run-systemd\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452027 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-run-systemd\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452124 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/26ce37d0-9ace-438a-bdd4-6bb30e41bac8-system-cni-dir\") pod \"multus-additional-cni-plugins-vpj8c\" (UID: \"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\") " pod="openshift-multus/multus-additional-cni-plugins-vpj8c"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452148 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/63ef3eb8-6103-492d-b6ef-f16081d15e83-rootfs\") pod \"machine-config-daemon-7mhh9\" (UID: \"63ef3eb8-6103-492d-b6ef-f16081d15e83\") " pod="openshift-machine-config-operator/machine-config-daemon-7mhh9"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452176 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-kubelet\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452214 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-etc-openvswitch\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452230 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/26ce37d0-9ace-438a-bdd4-6bb30e41bac8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vpj8c\" (UID: \"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\") " pod="openshift-multus/multus-additional-cni-plugins-vpj8c"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452262 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-slash\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452280 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-system-cni-dir\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452299 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-multus-daemon-config\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452321 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-host-run-netns\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452345 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/63ef3eb8-6103-492d-b6ef-f16081d15e83-proxy-tls\") pod \"machine-config-daemon-7mhh9\" (UID: \"63ef3eb8-6103-492d-b6ef-f16081d15e83\") " pod="openshift-machine-config-operator/machine-config-daemon-7mhh9"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452361 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-os-release\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452379 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-host-var-lib-cni-multus\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452422 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-run-netns\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452442 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-host-run-k8s-cni-cncf-io\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452459 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-hostroot\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452475 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-etc-kubernetes\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452496 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-cni-bin\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452515 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/26ce37d0-9ace-438a-bdd4-6bb30e41bac8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vpj8c\" (UID: \"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\") " pod="openshift-multus/multus-additional-cni-plugins-vpj8c"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452532 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-multus-cni-dir\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452553 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-node-log\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452573 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-run-ovn-kubernetes\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452606 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-multus-conf-dir\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452677 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-multus-conf-dir\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452729 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-os-release\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452764 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-host-var-lib-cni-multus\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452793 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-run-netns\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452418 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/26ce37d0-9ace-438a-bdd4-6bb30e41bac8-cni-binary-copy\") pod \"multus-additional-cni-plugins-vpj8c\" (UID: \"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\") " pod="openshift-multus/multus-additional-cni-plugins-vpj8c"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452828 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-host-run-k8s-cni-cncf-io\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452849 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-hostroot\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452869 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-etc-kubernetes\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452889 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-cni-bin\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452923 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-cni-binary-copy\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452955 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-host-run-multus-certs\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.452929 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-cni-netd\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.453006 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-log-socket\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.453034 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-run-ovn\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.453025 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-node-log\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.451703 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-host-var-lib-kubelet\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.453078 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-run-ovn-kubernetes\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.453087 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-multus-cni-dir\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.453109 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-host-run-netns\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.453113 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-system-cni-dir\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.453150 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-run-openvswitch\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.453173 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/26ce37d0-9ace-438a-bdd4-6bb30e41bac8-system-cni-dir\") pod \"multus-additional-cni-plugins-vpj8c\" (UID: \"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\") " pod="openshift-multus/multus-additional-cni-plugins-vpj8c"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.453137 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-slash\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.453140 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/63ef3eb8-6103-492d-b6ef-f16081d15e83-rootfs\") pod \"machine-config-daemon-7mhh9\" (UID: \"63ef3eb8-6103-492d-b6ef-f16081d15e83\") " pod="openshift-machine-config-operator/machine-config-daemon-7mhh9"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.453262 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-multus-socket-dir-parent\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.453232 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-kubelet\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.453227 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-etc-openvswitch\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.453356 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-cnibin\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.453539 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/26ce37d0-9ace-438a-bdd4-6bb30e41bac8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vpj8c\" (UID: \"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\") " pod="openshift-multus/multus-additional-cni-plugins-vpj8c"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.453714 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c788dfa-1923-4a2b-9619-73acf92ec849-env-overrides\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.453766 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/26ce37d0-9ace-438a-bdd4-6bb30e41bac8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vpj8c\" (UID: \"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\") " pod="openshift-multus/multus-additional-cni-plugins-vpj8c"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.453781 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/63ef3eb8-6103-492d-b6ef-f16081d15e83-mcd-auth-proxy-config\") pod \"machine-config-daemon-7mhh9\" (UID: \"63ef3eb8-6103-492d-b6ef-f16081d15e83\") " pod="openshift-machine-config-operator/machine-config-daemon-7mhh9"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.454114 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-multus-daemon-config\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.454325 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7c788dfa-1923-4a2b-9619-73acf92ec849-ovnkube-script-lib\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.456462 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/63ef3eb8-6103-492d-b6ef-f16081d15e83-proxy-tls\") pod \"machine-config-daemon-7mhh9\" (UID: \"63ef3eb8-6103-492d-b6ef-f16081d15e83\") " pod="openshift-machine-config-operator/machine-config-daemon-7mhh9"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.466241 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7c788dfa-1923-4a2b-9619-73acf92ec849-ovn-node-metrics-cert\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.471711 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4tp6\" (UniqueName: \"kubernetes.io/projected/5e0b10c6-02b7-49d0-9a76-e89ebbb00528-kube-api-access-s4tp6\") pod \"multus-nsjqz\" (UID: \"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\") " pod="openshift-multus/multus-nsjqz"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.474182 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m5tr\" (UniqueName: \"kubernetes.io/projected/26ce37d0-9ace-438a-bdd4-6bb30e41bac8-kube-api-access-8m5tr\") pod \"multus-additional-cni-plugins-vpj8c\" (UID: \"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\") " pod="openshift-multus/multus-additional-cni-plugins-vpj8c"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.480486 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.485312 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g9zg\" (UniqueName: \"kubernetes.io/projected/63ef3eb8-6103-492d-b6ef-f16081d15e83-kube-api-access-9g9zg\") pod \"machine-config-daemon-7mhh9\" (UID: \"63ef3eb8-6103-492d-b6ef-f16081d15e83\") " pod="openshift-machine-config-operator/machine-config-daemon-7mhh9"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.485493 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d758w\" (UniqueName: \"kubernetes.io/projected/7c788dfa-1923-4a2b-9619-73acf92ec849-kube-api-access-d758w\") pod \"ovnkube-node-dcfpx\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.503056 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z"
Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.528810 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.540591 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.552116 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-nsjqz" Feb 19 09:42:48 crc kubenswrapper[4965]: W0219 09:42:48.552596 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26ce37d0_9ace_438a_bdd4_6bb30e41bac8.slice/crio-21e62bd8f2023db8f13d6bfcfe746c4aa401059937706d120bb57487e43a8ce0 WatchSource:0}: Error finding container 21e62bd8f2023db8f13d6bfcfe746c4aa401059937706d120bb57487e43a8ce0: Status 404 returned error can't find the container with id 21e62bd8f2023db8f13d6bfcfe746c4aa401059937706d120bb57487e43a8ce0 Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.555335 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:48 crc kubenswrapper[4965]: W0219 09:42:48.568614 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e0b10c6_02b7_49d0_9a76_e89ebbb00528.slice/crio-5c0c42a18a0a9956f4481c064820e66ed8877322429fa04bbdb9a36dc43e6a58 WatchSource:0}: Error finding container 5c0c42a18a0a9956f4481c064820e66ed8877322429fa04bbdb9a36dc43e6a58: Status 404 returned error can't find the container with id 5c0c42a18a0a9956f4481c064820e66ed8877322429fa04bbdb9a36dc43e6a58 Feb 19 
09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.572488 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.579328 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.582009 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.596886 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:48 crc kubenswrapper[4965]: W0219 09:42:48.605548 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c788dfa_1923_4a2b_9619_73acf92ec849.slice/crio-e29fc7842b60cddc6bf76bc025db661541b45e08fe3f04f198c9f7210e22408a WatchSource:0}: Error finding container e29fc7842b60cddc6bf76bc025db661541b45e08fe3f04f198c9f7210e22408a: Status 404 returned error can't find the container with id e29fc7842b60cddc6bf76bc025db661541b45e08fe3f04f198c9f7210e22408a Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 
09:42:48.634591 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.652801 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d6
08d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.668161 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.689496 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.716676 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.782953 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.854258 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.861052 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.869637 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.875574 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.914310 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:48 crc kubenswrapper[4965]: I0219 09:42:48.993886 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:48Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.026515 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.140617 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.152996 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 19:09:11.468126465 +0000 UTC Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.226058 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.258630 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.272951 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d6
08d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.291581 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.312579 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.329903 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.347013 4965 generic.go:334] "Generic (PLEG): container finished" podID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerID="743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256" exitCode=0 Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.347082 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerDied","Data":"743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256"} Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.347116 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerStarted","Data":"e29fc7842b60cddc6bf76bc025db661541b45e08fe3f04f198c9f7210e22408a"} Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.349533 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" event={"ID":"26ce37d0-9ace-438a-bdd4-6bb30e41bac8","Type":"ContainerStarted","Data":"ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383"} Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.349600 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" event={"ID":"26ce37d0-9ace-438a-bdd4-6bb30e41bac8","Type":"ContainerStarted","Data":"21e62bd8f2023db8f13d6bfcfe746c4aa401059937706d120bb57487e43a8ce0"} Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.351749 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6nv8r" 
event={"ID":"e7972115-bfc1-42ee-b756-e394806eed51","Type":"ContainerStarted","Data":"597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65"} Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.357504 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.359916 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerStarted","Data":"107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837"} Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.359960 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerStarted","Data":"a1ff237da7e509d3b4a25e8042c384a768ef0123d1687b574502f769bde3121b"} Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.359976 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerStarted","Data":"7754eb58364d61046f0a8ce558630aa2b045c04e51d13eb4bd88c19f775774c0"} Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.364848 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nsjqz" event={"ID":"5e0b10c6-02b7-49d0-9a76-e89ebbb00528","Type":"ContainerStarted","Data":"8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8"} Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.364888 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nsjqz" event={"ID":"5e0b10c6-02b7-49d0-9a76-e89ebbb00528","Type":"ContainerStarted","Data":"5c0c42a18a0a9956f4481c064820e66ed8877322429fa04bbdb9a36dc43e6a58"} Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.377445 4965 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.1
68.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.394547 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.410577 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.432299 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.446327 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.466812 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.479020 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.492784 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.518149 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.529922 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c
9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.547156 4965 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.562212 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.581958 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.601611 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.613380 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.627285 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.642884 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.663449 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.684668 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.701624 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.726749 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.741568 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.755082 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: E0219 09:42:49.771331 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:42:53.771297388 +0000 UTC m=+29.392618698 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.771046 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.772352 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.785648 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.801381 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.872900 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.872951 4965 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.872976 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.873002 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:42:49 crc kubenswrapper[4965]: E0219 09:42:49.873139 4965 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:42:49 crc kubenswrapper[4965]: E0219 09:42:49.873282 4965 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:42:49 crc kubenswrapper[4965]: E0219 09:42:49.873312 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" 
failed. No retries permitted until 2026-02-19 09:42:53.873283938 +0000 UTC m=+29.494605428 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:42:49 crc kubenswrapper[4965]: E0219 09:42:49.873307 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:42:49 crc kubenswrapper[4965]: E0219 09:42:49.873470 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:42:49 crc kubenswrapper[4965]: E0219 09:42:49.873493 4965 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:42:49 crc kubenswrapper[4965]: E0219 09:42:49.873146 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:42:49 crc kubenswrapper[4965]: E0219 09:42:49.873546 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:42:53.873439771 +0000 UTC m=+29.494761071 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:42:49 crc kubenswrapper[4965]: E0219 09:42:49.873556 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:42:49 crc kubenswrapper[4965]: E0219 09:42:49.873575 4965 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:42:49 crc kubenswrapper[4965]: E0219 09:42:49.873577 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 09:42:53.873565774 +0000 UTC m=+29.494887084 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:42:49 crc kubenswrapper[4965]: E0219 09:42:49.873637 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 09:42:53.873612365 +0000 UTC m=+29.494933865 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.917591 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-pjxbf"] Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.918163 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-pjxbf" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.920811 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.920856 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.921238 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.923738 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.937096 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.953794 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.971407 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:49 crc kubenswrapper[4965]: I0219 09:42:49.985515 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:49Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.005530 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.024853 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.039048 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.051786 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.066065 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.075261 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxkbz\" (UniqueName: \"kubernetes.io/projected/e3965f16-f751-4de2-9f58-db2070fc99b7-kube-api-access-gxkbz\") pod \"node-ca-pjxbf\" (UID: \"e3965f16-f751-4de2-9f58-db2070fc99b7\") " pod="openshift-image-registry/node-ca-pjxbf" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.075324 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3965f16-f751-4de2-9f58-db2070fc99b7-host\") pod \"node-ca-pjxbf\" (UID: \"e3965f16-f751-4de2-9f58-db2070fc99b7\") " pod="openshift-image-registry/node-ca-pjxbf" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.075348 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e3965f16-f751-4de2-9f58-db2070fc99b7-serviceca\") pod \"node-ca-pjxbf\" (UID: \"e3965f16-f751-4de2-9f58-db2070fc99b7\") " pod="openshift-image-registry/node-ca-pjxbf" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.082585 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.098206 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.110632 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.125895 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.138171 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.157708 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 09:38:41.937770355 +0000 UTC Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.176071 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxkbz\" (UniqueName: \"kubernetes.io/projected/e3965f16-f751-4de2-9f58-db2070fc99b7-kube-api-access-gxkbz\") pod 
\"node-ca-pjxbf\" (UID: \"e3965f16-f751-4de2-9f58-db2070fc99b7\") " pod="openshift-image-registry/node-ca-pjxbf" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.176137 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3965f16-f751-4de2-9f58-db2070fc99b7-host\") pod \"node-ca-pjxbf\" (UID: \"e3965f16-f751-4de2-9f58-db2070fc99b7\") " pod="openshift-image-registry/node-ca-pjxbf" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.176169 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e3965f16-f751-4de2-9f58-db2070fc99b7-serviceca\") pod \"node-ca-pjxbf\" (UID: \"e3965f16-f751-4de2-9f58-db2070fc99b7\") " pod="openshift-image-registry/node-ca-pjxbf" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.176273 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3965f16-f751-4de2-9f58-db2070fc99b7-host\") pod \"node-ca-pjxbf\" (UID: \"e3965f16-f751-4de2-9f58-db2070fc99b7\") " pod="openshift-image-registry/node-ca-pjxbf" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.177092 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e3965f16-f751-4de2-9f58-db2070fc99b7-serviceca\") pod \"node-ca-pjxbf\" (UID: \"e3965f16-f751-4de2-9f58-db2070fc99b7\") " pod="openshift-image-registry/node-ca-pjxbf" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.193845 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxkbz\" (UniqueName: \"kubernetes.io/projected/e3965f16-f751-4de2-9f58-db2070fc99b7-kube-api-access-gxkbz\") pod \"node-ca-pjxbf\" (UID: \"e3965f16-f751-4de2-9f58-db2070fc99b7\") " pod="openshift-image-registry/node-ca-pjxbf" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.197095 4965 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.197110 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.197215 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:42:50 crc kubenswrapper[4965]: E0219 09:42:50.197262 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:42:50 crc kubenswrapper[4965]: E0219 09:42:50.197393 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:42:50 crc kubenswrapper[4965]: E0219 09:42:50.197543 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.232563 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-pjxbf" Feb 19 09:42:50 crc kubenswrapper[4965]: W0219 09:42:50.244120 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3965f16_f751_4de2_9f58_db2070fc99b7.slice/crio-46563aa4af40a2fddf2ceaf3a74295ca010cd20dbee8060966a10bb6eaf018a1 WatchSource:0}: Error finding container 46563aa4af40a2fddf2ceaf3a74295ca010cd20dbee8060966a10bb6eaf018a1: Status 404 returned error can't find the container with id 46563aa4af40a2fddf2ceaf3a74295ca010cd20dbee8060966a10bb6eaf018a1 Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.372565 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerStarted","Data":"0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1"} Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.372778 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerStarted","Data":"efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a"} Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.372803 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerStarted","Data":"ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a"} Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.372814 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" 
event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerStarted","Data":"9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab"} Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.374739 4965 generic.go:334] "Generic (PLEG): container finished" podID="26ce37d0-9ace-438a-bdd4-6bb30e41bac8" containerID="ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383" exitCode=0 Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.374797 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" event={"ID":"26ce37d0-9ace-438a-bdd4-6bb30e41bac8","Type":"ContainerDied","Data":"ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383"} Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.380688 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0"} Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.381866 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pjxbf" event={"ID":"e3965f16-f751-4de2-9f58-db2070fc99b7","Type":"ContainerStarted","Data":"46563aa4af40a2fddf2ceaf3a74295ca010cd20dbee8060966a10bb6eaf018a1"} Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.390144 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.403746 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.423591 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.435531 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.450990 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.465437 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.477938 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.492719 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.512876 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.532402 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.546837 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.559269 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.573136 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.587898 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"st
ate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-bin
ary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.601264 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.612267 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.626266 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.635756 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.649304 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.663084 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.674630 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.691904 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.728867 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.786002 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, 
/tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.816125 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c
9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.849044 4965 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.890365 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:50 crc kubenswrapper[4965]: I0219 09:42:50.931103 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:50Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.157862 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 18:34:36.430023423 +0000 UTC Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.387238 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerStarted","Data":"dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df"} Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.387305 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerStarted","Data":"51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3"} Feb 19 09:42:51 crc 
kubenswrapper[4965]: I0219 09:42:51.388677 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" event={"ID":"26ce37d0-9ace-438a-bdd4-6bb30e41bac8","Type":"ContainerStarted","Data":"0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d"} Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.391316 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pjxbf" event={"ID":"e3965f16-f751-4de2-9f58-db2070fc99b7","Type":"ContainerStarted","Data":"d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878"} Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.406124 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.420277 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.434742 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.443717 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.460133 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.473085 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.485047 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.495332 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.514633 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.538353 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"st
arted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.551679 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.576724 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c
9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.591114 4965 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.610882 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.632757 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.647280 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.662434 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.674512 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.692469 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.728116 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.785446 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.815637 4965 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.850609 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c
9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.889895 4965 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.930239 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:51 crc kubenswrapper[4965]: I0219 09:42:51.965932 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:51Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.014014 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:52Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.046072 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:52Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.160069 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:16:04.44298714 +0000 UTC Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.197427 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.197471 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.197533 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:42:52 crc kubenswrapper[4965]: E0219 09:42:52.197590 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:42:52 crc kubenswrapper[4965]: E0219 09:42:52.197751 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:42:52 crc kubenswrapper[4965]: E0219 09:42:52.197832 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.203158 4965 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.205473 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.205534 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.205547 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.205676 4965 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.212953 4965 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.213163 4965 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.213970 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.214086 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.214179 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.214298 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.214397 4965 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:52Z","lastTransitionTime":"2026-02-19T09:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:42:52 crc kubenswrapper[4965]: E0219 09:42:52.232552 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:52Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.237146 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.237508 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.237593 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.237682 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.237770 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:52Z","lastTransitionTime":"2026-02-19T09:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:52 crc kubenswrapper[4965]: E0219 09:42:52.256986 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:52Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.261282 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.261313 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.261321 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.261337 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.261347 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:52Z","lastTransitionTime":"2026-02-19T09:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:52 crc kubenswrapper[4965]: E0219 09:42:52.274717 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:52Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.282280 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.282348 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.282363 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.282385 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.282399 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:52Z","lastTransitionTime":"2026-02-19T09:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:52 crc kubenswrapper[4965]: E0219 09:42:52.299443 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:52Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.310264 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.310305 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.310314 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.310330 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.310343 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:52Z","lastTransitionTime":"2026-02-19T09:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:52 crc kubenswrapper[4965]: E0219 09:42:52.325356 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:52Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:52 crc kubenswrapper[4965]: E0219 09:42:52.325490 4965 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.327491 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.327533 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.327547 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.327566 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.327578 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:52Z","lastTransitionTime":"2026-02-19T09:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.429950 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.430001 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.430014 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.430032 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.430042 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:52Z","lastTransitionTime":"2026-02-19T09:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.533227 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.533482 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.533491 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.533507 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.533520 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:52Z","lastTransitionTime":"2026-02-19T09:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.636373 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.636454 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.636469 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.636491 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.636504 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:52Z","lastTransitionTime":"2026-02-19T09:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.739061 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.739111 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.739121 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.739138 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.739151 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:52Z","lastTransitionTime":"2026-02-19T09:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.841329 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.841400 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.841424 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.841449 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.841467 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:52Z","lastTransitionTime":"2026-02-19T09:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.943952 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.943995 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.944004 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.944022 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:52 crc kubenswrapper[4965]: I0219 09:42:52.944032 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:52Z","lastTransitionTime":"2026-02-19T09:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.047414 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.047453 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.047462 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.047483 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.047494 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:53Z","lastTransitionTime":"2026-02-19T09:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.150623 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.150909 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.150917 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.150938 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.150950 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:53Z","lastTransitionTime":"2026-02-19T09:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.160931 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 04:42:09.582591958 +0000 UTC Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.254033 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.254077 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.254089 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.254109 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.254121 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:53Z","lastTransitionTime":"2026-02-19T09:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.356804 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.356843 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.356852 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.356869 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.356879 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:53Z","lastTransitionTime":"2026-02-19T09:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.400535 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerStarted","Data":"533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2"} Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.402757 4965 generic.go:334] "Generic (PLEG): container finished" podID="26ce37d0-9ace-438a-bdd4-6bb30e41bac8" containerID="0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d" exitCode=0 Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.402810 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" event={"ID":"26ce37d0-9ace-438a-bdd4-6bb30e41bac8","Type":"ContainerDied","Data":"0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d"} Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.424006 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:53Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.437424 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:42:53Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.459562 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:53Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.459955 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.460015 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.460029 4965 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.460051 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.460065 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:53Z","lastTransitionTime":"2026-02-19T09:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.481061 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:53Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.501700 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:53Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.513471 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:53Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.526995 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:53Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.542039 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:53Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.560430 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:53Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.563237 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.563298 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.563315 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.563341 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.563358 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:53Z","lastTransitionTime":"2026-02-19T09:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.583844 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:53Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.598853 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:53Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.612532 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:53Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.628314 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:53Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.641823 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:53Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.666161 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.666236 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.666250 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:53 crc 
kubenswrapper[4965]: I0219 09:42:53.666275 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.666289 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:53Z","lastTransitionTime":"2026-02-19T09:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.768221 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.768263 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.768271 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.768289 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.768301 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:53Z","lastTransitionTime":"2026-02-19T09:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.830311 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:42:53 crc kubenswrapper[4965]: E0219 09:42:53.830602 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:43:01.830560556 +0000 UTC m=+37.451881906 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.871482 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.871543 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.871563 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.871590 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:53 crc kubenswrapper[4965]: 
I0219 09:42:53.871609 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:53Z","lastTransitionTime":"2026-02-19T09:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.931502 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.931570 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.931602 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.931634 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:42:53 crc kubenswrapper[4965]: E0219 09:42:53.931742 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:42:53 crc kubenswrapper[4965]: E0219 09:42:53.931742 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:42:53 crc kubenswrapper[4965]: E0219 09:42:53.931760 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:42:53 crc kubenswrapper[4965]: E0219 09:42:53.931775 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:42:53 crc kubenswrapper[4965]: E0219 09:42:53.931780 4965 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:42:53 crc kubenswrapper[4965]: E0219 09:42:53.931786 4965 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:42:53 crc kubenswrapper[4965]: E0219 
09:42:53.931847 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 09:43:01.931829419 +0000 UTC m=+37.553150729 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:42:53 crc kubenswrapper[4965]: E0219 09:42:53.931848 4965 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:42:53 crc kubenswrapper[4965]: E0219 09:42:53.931883 4965 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:42:53 crc kubenswrapper[4965]: E0219 09:42:53.931866 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 09:43:01.93185945 +0000 UTC m=+37.553180760 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:42:53 crc kubenswrapper[4965]: E0219 09:42:53.932027 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:43:01.931995873 +0000 UTC m=+37.553317353 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:42:53 crc kubenswrapper[4965]: E0219 09:42:53.932060 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:43:01.932043604 +0000 UTC m=+37.553365144 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.974463 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.974499 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.974509 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.974524 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:53 crc kubenswrapper[4965]: I0219 09:42:53.974535 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:53Z","lastTransitionTime":"2026-02-19T09:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.076842 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.076896 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.076912 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.076936 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.076949 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:54Z","lastTransitionTime":"2026-02-19T09:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.161223 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 19:11:36.875406983 +0000 UTC Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.180060 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.180096 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.180105 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.180119 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.180129 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:54Z","lastTransitionTime":"2026-02-19T09:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.197634 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.197680 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.197634 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:42:54 crc kubenswrapper[4965]: E0219 09:42:54.197788 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:42:54 crc kubenswrapper[4965]: E0219 09:42:54.197836 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:42:54 crc kubenswrapper[4965]: E0219 09:42:54.197947 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.282993 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.283034 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.283042 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.283061 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.283076 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:54Z","lastTransitionTime":"2026-02-19T09:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.385825 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.385880 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.385901 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.385941 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.385956 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:54Z","lastTransitionTime":"2026-02-19T09:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.408980 4965 generic.go:334] "Generic (PLEG): container finished" podID="26ce37d0-9ace-438a-bdd4-6bb30e41bac8" containerID="e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e" exitCode=0 Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.409048 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" event={"ID":"26ce37d0-9ace-438a-bdd4-6bb30e41bac8","Type":"ContainerDied","Data":"e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e"} Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.430133 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:54Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.446546 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:54Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.461019 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:54Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.472806 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:54Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.488760 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.488812 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.488826 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:54 crc 
kubenswrapper[4965]: I0219 09:42:54.488848 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.488861 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:54Z","lastTransitionTime":"2026-02-19T09:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.493497 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:54Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.510881 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:42:54Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.527474 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:54Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 
09:42:54.541550 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:54Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.555580 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c
9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:54Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.569153 4965 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:54Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.585214 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:54Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.592539 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.592576 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.592588 4965 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.592607 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.592619 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:54Z","lastTransitionTime":"2026-02-19T09:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.597450 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e
6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:54Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.612040 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:54Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.624702 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:54Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.694836 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.694900 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.694918 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.694945 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.694966 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:54Z","lastTransitionTime":"2026-02-19T09:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.796984 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.797025 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.797034 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.797049 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.797058 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:54Z","lastTransitionTime":"2026-02-19T09:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.900071 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.900136 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.900153 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.900182 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.900222 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:54Z","lastTransitionTime":"2026-02-19T09:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:54 crc kubenswrapper[4965]: I0219 09:42:54.912123 4965 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.003436 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.003496 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.003513 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.003545 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.003565 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:55Z","lastTransitionTime":"2026-02-19T09:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.105832 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.105885 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.105896 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.105917 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.105933 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:55Z","lastTransitionTime":"2026-02-19T09:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.161420 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 19:34:02.104977594 +0000 UTC Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.208458 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.208697 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.208707 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.208720 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.208731 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:55Z","lastTransitionTime":"2026-02-19T09:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.215874 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z 
is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.230431 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.243688 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.254491 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.274067 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.297752 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.310506 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.310548 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.310560 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.310576 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.310587 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:55Z","lastTransitionTime":"2026-02-19T09:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.322521 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.337879 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.359088 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.372503 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.388325 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.403439 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.413540 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.413606 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.413618 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.413691 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.413715 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:55Z","lastTransitionTime":"2026-02-19T09:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.418121 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerStarted","Data":"93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf"} Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.418447 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.421567 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 
09:42:55.422008 4965 generic.go:334] "Generic (PLEG): container finished" podID="26ce37d0-9ace-438a-bdd4-6bb30e41bac8" containerID="9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e" exitCode=0 Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.422047 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" event={"ID":"26ce37d0-9ace-438a-bdd4-6bb30e41bac8","Type":"ContainerDied","Data":"9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e"} Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.436713 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.469759 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.470291 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.486101 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.501563 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.517517 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.517569 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.517581 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.517600 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.517610 4965 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:55Z","lastTransitionTime":"2026-02-19T09:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.517759 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.539434 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.553595 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.567943 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.582918 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.596269 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.617894 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.621839 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.621887 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.621898 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.621916 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.621928 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:55Z","lastTransitionTime":"2026-02-19T09:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.631372 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.642811 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.656410 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.668566 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.682974 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.693606 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.712259 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.723320 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.725155 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.725237 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.725254 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:55 crc 
kubenswrapper[4965]: I0219 09:42:55.725295 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.725312 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:55Z","lastTransitionTime":"2026-02-19T09:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.744798 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.764435 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.778417 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.792898 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.814412 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.827942 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.827985 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.827996 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.828013 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.828023 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:55Z","lastTransitionTime":"2026-02-19T09:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.834092 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.849296 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.861003 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.878035 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.888590 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.933246 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.933287 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.933301 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.933317 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:55 crc kubenswrapper[4965]: I0219 09:42:55.933333 4965 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:55Z","lastTransitionTime":"2026-02-19T09:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.036388 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.036449 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.036460 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.036477 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.036488 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:56Z","lastTransitionTime":"2026-02-19T09:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.139414 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.139472 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.139488 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.139512 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.139525 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:56Z","lastTransitionTime":"2026-02-19T09:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.161886 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 04:12:10.361464928 +0000 UTC Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.197866 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.197960 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:42:56 crc kubenswrapper[4965]: E0219 09:42:56.198063 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:42:56 crc kubenswrapper[4965]: E0219 09:42:56.198223 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.198004 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:42:56 crc kubenswrapper[4965]: E0219 09:42:56.198355 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.243092 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.243152 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.243166 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.243190 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.243231 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:56Z","lastTransitionTime":"2026-02-19T09:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.346280 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.346326 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.346339 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.346360 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.346372 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:56Z","lastTransitionTime":"2026-02-19T09:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.428386 4965 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.429036 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" event={"ID":"26ce37d0-9ace-438a-bdd4-6bb30e41bac8","Type":"ContainerStarted","Data":"f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee"} Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.429452 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.449633 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.454500 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.462635 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.482216 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.486796 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.487114 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.487180 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.487318 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.487397 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:56Z","lastTransitionTime":"2026-02-19T09:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.495137 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.508845 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.523577 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.535089 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.545433 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.571055 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.590345 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.590391 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.590402 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.590421 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.590433 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:56Z","lastTransitionTime":"2026-02-19T09:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.592135 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5
tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.609040 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.623189 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.665890 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.688366 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.693039 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.693090 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.693103 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.693126 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.693138 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:56Z","lastTransitionTime":"2026-02-19T09:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.713027 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.737981 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.754776 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.768275 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.787496 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.795905 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.795942 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.795951 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.795966 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.795976 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:56Z","lastTransitionTime":"2026-02-19T09:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.803553 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.817897 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.832235 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.845278 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.860763 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5
tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.874304 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.886827 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.892565 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.898336 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.898374 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.898385 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.898404 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.898415 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:56Z","lastTransitionTime":"2026-02-19T09:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.904315 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z 
is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.917031 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.934727 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f41
6f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.950776 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.966107 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\"
,\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.979690 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:56 crc kubenswrapper[4965]: I0219 09:42:56.992927 4965 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:56Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.001430 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.001479 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.001489 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.001507 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.001519 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:57Z","lastTransitionTime":"2026-02-19T09:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.006829 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:57Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.019337 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:57Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.033810 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:57Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.046343 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:57Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.067018 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:57Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.089966 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:57Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.104988 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.105042 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.105055 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.105075 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.104975 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:57Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.105087 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:57Z","lastTransitionTime":"2026-02-19T09:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.119914 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:57Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.141281 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:57Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.162513 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 06:34:38.82571887 +0000 UTC Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.207392 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.207428 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.207440 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.207456 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.207470 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:57Z","lastTransitionTime":"2026-02-19T09:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.309607 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.309657 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.309668 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.309686 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.309698 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:57Z","lastTransitionTime":"2026-02-19T09:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.412490 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.412539 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.412550 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.412576 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.412587 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:57Z","lastTransitionTime":"2026-02-19T09:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.432370 4965 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.518741 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.518819 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.518831 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.518850 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.518870 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:57Z","lastTransitionTime":"2026-02-19T09:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.621755 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.621820 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.621833 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.621856 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.621871 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:57Z","lastTransitionTime":"2026-02-19T09:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.724426 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.724473 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.724486 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.724505 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.724518 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:57Z","lastTransitionTime":"2026-02-19T09:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.826899 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.826974 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.826996 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.827025 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.827048 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:57Z","lastTransitionTime":"2026-02-19T09:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.929630 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.929677 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.929685 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.929702 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:57 crc kubenswrapper[4965]: I0219 09:42:57.929712 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:57Z","lastTransitionTime":"2026-02-19T09:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.031771 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.031817 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.031830 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.031850 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.031864 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:58Z","lastTransitionTime":"2026-02-19T09:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.135348 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.135382 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.135392 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.135407 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.135419 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:58Z","lastTransitionTime":"2026-02-19T09:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.162955 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 05:41:18.137363779 +0000 UTC Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.197670 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.197689 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.197732 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:42:58 crc kubenswrapper[4965]: E0219 09:42:58.197913 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:42:58 crc kubenswrapper[4965]: E0219 09:42:58.198079 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:42:58 crc kubenswrapper[4965]: E0219 09:42:58.198231 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.238168 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.238238 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.238256 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.238282 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.238298 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:58Z","lastTransitionTime":"2026-02-19T09:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.341390 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.341469 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.341491 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.341517 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.341536 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:58Z","lastTransitionTime":"2026-02-19T09:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.439094 4965 generic.go:334] "Generic (PLEG): container finished" podID="26ce37d0-9ace-438a-bdd4-6bb30e41bac8" containerID="f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee" exitCode=0 Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.439215 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" event={"ID":"26ce37d0-9ace-438a-bdd4-6bb30e41bac8","Type":"ContainerDied","Data":"f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee"} Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.439258 4965 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.443312 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.443339 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.443351 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.443373 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.443387 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:58Z","lastTransitionTime":"2026-02-19T09:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.463843 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:58Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.485106 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:58Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.501976 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:58Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.515585 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:58Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.536726 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:58Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.547117 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.547158 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.547170 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.547214 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.547231 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:58Z","lastTransitionTime":"2026-02-19T09:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.551965 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:58Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.566933 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:58Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.581798 4965 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:58Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.596481 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:42:58Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.612464 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:58Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.626520 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:58Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.636940 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:58Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.649523 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.649719 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.649728 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.649743 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.649752 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:58Z","lastTransitionTime":"2026-02-19T09:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.651232 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:58Z 
is after 2025-08-24T17:21:41Z" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.664245 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:58Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.752066 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.752112 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.752123 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.752141 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.752153 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:58Z","lastTransitionTime":"2026-02-19T09:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.860169 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.860452 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.860542 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.860640 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.860715 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:58Z","lastTransitionTime":"2026-02-19T09:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.963321 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.963359 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.963370 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.963389 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:58 crc kubenswrapper[4965]: I0219 09:42:58.963401 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:58Z","lastTransitionTime":"2026-02-19T09:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.065899 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.066156 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.066269 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.066341 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.066404 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:59Z","lastTransitionTime":"2026-02-19T09:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.163622 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 07:21:42.5117024 +0000 UTC Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.175965 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.176011 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.176023 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.176055 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.176069 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:59Z","lastTransitionTime":"2026-02-19T09:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.279519 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.279579 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.279599 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.279625 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.279641 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:59Z","lastTransitionTime":"2026-02-19T09:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.382005 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.382053 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.382066 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.382100 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.382112 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:59Z","lastTransitionTime":"2026-02-19T09:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.445562 4965 generic.go:334] "Generic (PLEG): container finished" podID="26ce37d0-9ace-438a-bdd4-6bb30e41bac8" containerID="5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81" exitCode=0 Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.445600 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" event={"ID":"26ce37d0-9ace-438a-bdd4-6bb30e41bac8","Type":"ContainerDied","Data":"5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81"} Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.465769 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d82225
09ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:59Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.484280 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:59Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.485956 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.485996 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.486011 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.486030 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.486044 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:59Z","lastTransitionTime":"2026-02-19T09:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.503409 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:59Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.518695 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:42:59Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.541613 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:59Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.564584 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:59Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.577367 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:59Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.589529 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.589571 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.589580 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.589596 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.589606 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:59Z","lastTransitionTime":"2026-02-19T09:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.601412 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:59Z 
is after 2025-08-24T17:21:41Z" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.617208 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:59Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.643035 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd89
09e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:59Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.658022 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:59Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.672393 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:59Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.685749 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:59Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.692010 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.692049 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.692059 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:59 crc 
kubenswrapper[4965]: I0219 09:42:59.692107 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.692118 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:59Z","lastTransitionTime":"2026-02-19T09:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.710708 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:42:59Z is after 2025-08-24T17:21:41Z" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.795666 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.795710 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.795721 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.795738 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.795748 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:59Z","lastTransitionTime":"2026-02-19T09:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.898249 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.898280 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.898291 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.898305 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:42:59 crc kubenswrapper[4965]: I0219 09:42:59.898315 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:42:59Z","lastTransitionTime":"2026-02-19T09:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.000924 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.000975 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.000989 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.001008 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.001035 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:00Z","lastTransitionTime":"2026-02-19T09:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.103949 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.103994 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.104008 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.104026 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.104039 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:00Z","lastTransitionTime":"2026-02-19T09:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.164789 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 15:11:34.714655274 +0000 UTC Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.197101 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.197124 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.197178 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:00 crc kubenswrapper[4965]: E0219 09:43:00.197277 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:00 crc kubenswrapper[4965]: E0219 09:43:00.197459 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:00 crc kubenswrapper[4965]: E0219 09:43:00.197551 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.206283 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.206333 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.206383 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.206403 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.206415 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:00Z","lastTransitionTime":"2026-02-19T09:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.309214 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.309250 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.309259 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.309275 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.309286 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:00Z","lastTransitionTime":"2026-02-19T09:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.334938 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.335104 4965 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 09:43:00 crc kubenswrapper[4965]: E0219 09:43:00.336007 4965 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf is running failed: container process not found" containerID="93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 19 09:43:00 crc kubenswrapper[4965]: E0219 09:43:00.336696 4965 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf is running failed: container process not found" containerID="93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 19 09:43:00 crc kubenswrapper[4965]: E0219 09:43:00.337064 4965 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf is running failed: container process not found" containerID="93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 19 09:43:00 crc kubenswrapper[4965]: E0219 09:43:00.337163 4965 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is 
not created or running: checking if PID of 93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovnkube-controller" Feb 19 09:43:00 crc kubenswrapper[4965]: E0219 09:43:00.337686 4965 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf is running failed: container process not found" containerID="93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 19 09:43:00 crc kubenswrapper[4965]: E0219 09:43:00.337993 4965 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf is running failed: container process not found" containerID="93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 19 09:43:00 crc kubenswrapper[4965]: E0219 09:43:00.338389 4965 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf is running failed: container process not found" containerID="93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 19 09:43:00 crc kubenswrapper[4965]: E0219 09:43:00.338437 4965 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovnkube-controller" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.411814 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.411871 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.411881 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.411899 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.411910 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:00Z","lastTransitionTime":"2026-02-19T09:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.450145 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dcfpx_7c788dfa-1923-4a2b-9619-73acf92ec849/ovnkube-controller/0.log" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.452931 4965 generic.go:334] "Generic (PLEG): container finished" podID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerID="93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf" exitCode=1 Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.452978 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerDied","Data":"93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf"} Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.453859 4965 scope.go:117] "RemoveContainer" containerID="93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.457538 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" event={"ID":"26ce37d0-9ace-438a-bdd4-6bb30e41bac8","Type":"ContainerStarted","Data":"cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896fd4103"} Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.471127 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.485778 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.502749 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.516168 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.516229 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.516247 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.516268 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.516282 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:00Z","lastTransitionTime":"2026-02-19T09:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.516405 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.532045 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.548136 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.561079 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.572257 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.591319 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"message\\\":\\\" 6182 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 
09:43:00.162470 6182 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:43:00.162523 6182 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:43:00.162632 6182 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:43:00.163137 6182 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0219 09:43:00.163214 6182 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:43:00.163277 6182 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:43:00.163855 6182 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.606258 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.619352 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.619398 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.619408 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.619428 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.619442 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:00Z","lastTransitionTime":"2026-02-19T09:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.625316 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.647066 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\"
,\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.665269 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.679897 4965 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.694472 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.706960 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.722572 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.722632 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.722644 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.722667 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.722680 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:00Z","lastTransitionTime":"2026-02-19T09:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.723626 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z 
is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.735715 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.752280 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd89
09e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.766387 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.780116 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.791642 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.809358 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"message\\\":\\\" 6182 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 
09:43:00.162470 6182 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:43:00.162523 6182 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:43:00.162632 6182 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:43:00.163137 6182 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0219 09:43:00.163214 6182 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:43:00.163277 6182 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:43:00.163855 6182 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.822078 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\"
,\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.825782 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.825827 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.825836 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.825853 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.825863 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:00Z","lastTransitionTime":"2026-02-19T09:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.839816 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5
069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.852570 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.865419 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.884325 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896fd4103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:00Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.928243 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.928278 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.928287 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.928302 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:00 crc kubenswrapper[4965]: I0219 09:43:00.928312 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:00Z","lastTransitionTime":"2026-02-19T09:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.002244 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt"] Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.002880 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.005345 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.005445 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.024696 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.030237 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.030283 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.030295 4965 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.030313 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.030324 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:01Z","lastTransitionTime":"2026-02-19T09:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.037843 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e
6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.051952 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.064397 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.079636 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.093065 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.103963 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.123174 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"message\\\":\\\" 6182 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 
09:43:00.162470 6182 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:43:00.162523 6182 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:43:00.162632 6182 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:43:00.163137 6182 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0219 09:43:00.163214 6182 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:43:00.163277 6182 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:43:00.163855 6182 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.129587 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ab24976-06f3-4373-825a-5234ff24f2cc-env-overrides\") pod \"ovnkube-control-plane-749d76644c-g5jnt\" (UID: \"0ab24976-06f3-4373-825a-5234ff24f2cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.129626 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ab24976-06f3-4373-825a-5234ff24f2cc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-g5jnt\" (UID: \"0ab24976-06f3-4373-825a-5234ff24f2cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.129668 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll7dd\" (UniqueName: \"kubernetes.io/projected/0ab24976-06f3-4373-825a-5234ff24f2cc-kube-api-access-ll7dd\") pod \"ovnkube-control-plane-749d76644c-g5jnt\" (UID: \"0ab24976-06f3-4373-825a-5234ff24f2cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.129702 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ab24976-06f3-4373-825a-5234ff24f2cc-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-g5jnt\" (UID: \"0ab24976-06f3-4373-825a-5234ff24f2cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.133075 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.133108 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.133119 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.133140 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.133154 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:01Z","lastTransitionTime":"2026-02-19T09:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.136719 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ab24976-06f3-4373-825a-5234ff24f2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g5jnt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.150054 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\
\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.163521 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed
2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.165466 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 09:03:23.94914692 +0000 UTC Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.176315 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.189017 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.201741 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.215662 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896fd4103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.230217 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ab24976-06f3-4373-825a-5234ff24f2cc-env-overrides\") pod \"ovnkube-control-plane-749d76644c-g5jnt\" (UID: \"0ab24976-06f3-4373-825a-5234ff24f2cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.230250 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ab24976-06f3-4373-825a-5234ff24f2cc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-g5jnt\" (UID: \"0ab24976-06f3-4373-825a-5234ff24f2cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.230284 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll7dd\" (UniqueName: \"kubernetes.io/projected/0ab24976-06f3-4373-825a-5234ff24f2cc-kube-api-access-ll7dd\") pod \"ovnkube-control-plane-749d76644c-g5jnt\" (UID: \"0ab24976-06f3-4373-825a-5234ff24f2cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.230303 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ab24976-06f3-4373-825a-5234ff24f2cc-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-g5jnt\" (UID: \"0ab24976-06f3-4373-825a-5234ff24f2cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.230924 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ab24976-06f3-4373-825a-5234ff24f2cc-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-g5jnt\" (UID: \"0ab24976-06f3-4373-825a-5234ff24f2cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.230999 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ab24976-06f3-4373-825a-5234ff24f2cc-env-overrides\") pod \"ovnkube-control-plane-749d76644c-g5jnt\" (UID: \"0ab24976-06f3-4373-825a-5234ff24f2cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.236181 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.236260 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.236272 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.236292 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.236309 4965 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:01Z","lastTransitionTime":"2026-02-19T09:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.236455 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ab24976-06f3-4373-825a-5234ff24f2cc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-g5jnt\" (UID: \"0ab24976-06f3-4373-825a-5234ff24f2cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.247397 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll7dd\" (UniqueName: \"kubernetes.io/projected/0ab24976-06f3-4373-825a-5234ff24f2cc-kube-api-access-ll7dd\") pod \"ovnkube-control-plane-749d76644c-g5jnt\" (UID: \"0ab24976-06f3-4373-825a-5234ff24f2cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.315627 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.338936 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.338975 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.338986 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.339004 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.339016 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:01Z","lastTransitionTime":"2026-02-19T09:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.441335 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.441383 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.441393 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.441409 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.441419 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:01Z","lastTransitionTime":"2026-02-19T09:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.465009 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dcfpx_7c788dfa-1923-4a2b-9619-73acf92ec849/ovnkube-controller/0.log" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.467028 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerStarted","Data":"708efdbb32a469d559b54fa6c816d7a3ce1b7fc4bca9a81f08ce9400d6090f0c"} Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.468293 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" event={"ID":"0ab24976-06f3-4373-825a-5234ff24f2cc","Type":"ContainerStarted","Data":"6d2b2a6bdd8187ad23010e928a4730240ffc978c55fed6041994e9c3616869f3"} Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.480347 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.489995 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.503667 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.514861 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.528117 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.539361 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.545106 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.545157 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.545169 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.545189 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.545223 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:01Z","lastTransitionTime":"2026-02-19T09:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.551824 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.564050 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.583572 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://708efdbb32a469d559b54fa6c816d7a3ce1b7fc4bca9a81f08ce9400d6090f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"message\\\":\\\" 6182 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:43:00.162470 6182 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:43:00.162523 6182 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:43:00.162632 6182 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:43:00.163137 6182 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0219 09:43:00.163214 6182 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:43:00.163277 6182 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:43:00.163855 6182 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.596024 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ab24976-06f3-4373-825a-5234ff24f2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g5jnt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.612509 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.628776 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.644158 4965 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.647905 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.647935 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.647947 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.647965 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.647977 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:01Z","lastTransitionTime":"2026-02-19T09:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.658954 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.677757 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896fd4103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-
19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:01Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.750573 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.750616 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.750629 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.750646 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.750658 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:01Z","lastTransitionTime":"2026-02-19T09:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.837179 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:43:01 crc kubenswrapper[4965]: E0219 09:43:01.837698 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:43:17.837670281 +0000 UTC m=+53.458991591 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.853110 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.853141 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.853153 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.853169 4965 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.853180 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:01Z","lastTransitionTime":"2026-02-19T09:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.938710 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.938800 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.938852 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.938903 4965 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:01 crc kubenswrapper[4965]: E0219 09:43:01.939126 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:43:01 crc kubenswrapper[4965]: E0219 09:43:01.939161 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:43:01 crc kubenswrapper[4965]: E0219 09:43:01.939181 4965 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:43:01 crc kubenswrapper[4965]: E0219 09:43:01.939299 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 09:43:17.939276271 +0000 UTC m=+53.560597621 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:43:01 crc kubenswrapper[4965]: E0219 09:43:01.939419 4965 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:43:01 crc kubenswrapper[4965]: E0219 09:43:01.939517 4965 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:43:01 crc kubenswrapper[4965]: E0219 09:43:01.939564 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:43:01 crc kubenswrapper[4965]: E0219 09:43:01.939610 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:43:01 crc kubenswrapper[4965]: E0219 09:43:01.939533 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:43:17.939511567 +0000 UTC m=+53.560832877 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:43:01 crc kubenswrapper[4965]: E0219 09:43:01.939635 4965 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:43:01 crc kubenswrapper[4965]: E0219 09:43:01.939667 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:43:17.93964278 +0000 UTC m=+53.560964260 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:43:01 crc kubenswrapper[4965]: E0219 09:43:01.939716 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 09:43:17.939691801 +0000 UTC m=+53.561013301 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.955969 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.956015 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.956029 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.956050 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:01 crc kubenswrapper[4965]: I0219 09:43:01.956063 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:01Z","lastTransitionTime":"2026-02-19T09:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.058794 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.058861 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.058878 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.058904 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.058923 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:02Z","lastTransitionTime":"2026-02-19T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.128909 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-lwjwk"] Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.129597 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:02 crc kubenswrapper[4965]: E0219 09:43:02.129690 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.140994 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdh66\" (UniqueName: \"kubernetes.io/projected/1e1b431a-0390-4366-82d1-6cb782c7a9e8-kube-api-access-vdh66\") pod \"network-metrics-daemon-lwjwk\" (UID: \"1e1b431a-0390-4366-82d1-6cb782c7a9e8\") " pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.141110 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs\") pod \"network-metrics-daemon-lwjwk\" (UID: \"1e1b431a-0390-4366-82d1-6cb782c7a9e8\") " pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.161173 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.162347 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.162399 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.162411 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.162431 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.162444 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:02Z","lastTransitionTime":"2026-02-19T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.166502 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 11:17:34.415956178 +0000 UTC Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.186525 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.196859 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.196896 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.196872 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:02 crc kubenswrapper[4965]: E0219 09:43:02.197021 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:02 crc kubenswrapper[4965]: E0219 09:43:02.197151 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:02 crc kubenswrapper[4965]: E0219 09:43:02.197283 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.207492 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.219145 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.237765 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://708efdbb32a469d559b54fa6c816d7a3ce1b7fc4bca9a81f08ce9400d6090f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"message\\\":\\\" 6182 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:43:00.162470 6182 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:43:00.162523 6182 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:43:00.162632 6182 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:43:00.163137 6182 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0219 09:43:00.163214 6182 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:43:00.163277 6182 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:43:00.163855 6182 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.241819 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs\") pod \"network-metrics-daemon-lwjwk\" (UID: \"1e1b431a-0390-4366-82d1-6cb782c7a9e8\") " pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.241863 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdh66\" (UniqueName: \"kubernetes.io/projected/1e1b431a-0390-4366-82d1-6cb782c7a9e8-kube-api-access-vdh66\") pod \"network-metrics-daemon-lwjwk\" (UID: \"1e1b431a-0390-4366-82d1-6cb782c7a9e8\") " pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:02 crc 
kubenswrapper[4965]: E0219 09:43:02.242050 4965 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:43:02 crc kubenswrapper[4965]: E0219 09:43:02.242131 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs podName:1e1b431a-0390-4366-82d1-6cb782c7a9e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:43:02.742111905 +0000 UTC m=+38.363433205 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs") pod "network-metrics-daemon-lwjwk" (UID: "1e1b431a-0390-4366-82d1-6cb782c7a9e8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.251750 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ab24976-06f3-4373-825a-5234ff24f2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g5jnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.259543 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdh66\" (UniqueName: \"kubernetes.io/projected/1e1b431a-0390-4366-82d1-6cb782c7a9e8-kube-api-access-vdh66\") pod \"network-metrics-daemon-lwjwk\" (UID: \"1e1b431a-0390-4366-82d1-6cb782c7a9e8\") " pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.265222 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.265264 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.265279 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.265301 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.265314 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:02Z","lastTransitionTime":"2026-02-19T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.268687 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.284808 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.300843 4965 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.315033 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.332281 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896fd4103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.346080 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.357526 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.368544 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.368594 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.368607 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.368626 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.368642 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:02Z","lastTransitionTime":"2026-02-19T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.375979 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z 
is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.389317 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.401055 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lwjwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e1b431a-0390-4366-82d1-6cb782c7a9e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lwjwk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.470925 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.470973 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.470984 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.471006 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.471018 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:02Z","lastTransitionTime":"2026-02-19T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.483314 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" event={"ID":"0ab24976-06f3-4373-825a-5234ff24f2cc","Type":"ContainerStarted","Data":"fcbd8e6b02f20a249ddb3fbf20ddd72a94b40fd420cb6ad4c59ea513994ac382"} Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.483418 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" event={"ID":"0ab24976-06f3-4373-825a-5234ff24f2cc","Type":"ContainerStarted","Data":"ef52c51fd38bf34f1fc3eb014d85c40137dd15030237334159ffbb71e1d6c2a6"} Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.485604 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dcfpx_7c788dfa-1923-4a2b-9619-73acf92ec849/ovnkube-controller/1.log" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.486260 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dcfpx_7c788dfa-1923-4a2b-9619-73acf92ec849/ovnkube-controller/0.log" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.489570 4965 generic.go:334] "Generic (PLEG): container finished" podID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerID="708efdbb32a469d559b54fa6c816d7a3ce1b7fc4bca9a81f08ce9400d6090f0c" exitCode=1 Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.489611 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerDied","Data":"708efdbb32a469d559b54fa6c816d7a3ce1b7fc4bca9a81f08ce9400d6090f0c"} Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.489671 4965 scope.go:117] "RemoveContainer" containerID="93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.490309 
4965 scope.go:117] "RemoveContainer" containerID="708efdbb32a469d559b54fa6c816d7a3ce1b7fc4bca9a81f08ce9400d6090f0c" Feb 19 09:43:02 crc kubenswrapper[4965]: E0219 09:43:02.490473 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-dcfpx_openshift-ovn-kubernetes(7c788dfa-1923-4a2b-9619-73acf92ec849)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.499602 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.512368 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.528484 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.543245 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.556213 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lwjwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e1b431a-0390-4366-82d1-6cb782c7a9e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lwjwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc 
kubenswrapper[4965]: I0219 09:43:02.568402 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.578352 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.578474 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.578491 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.579131 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.579167 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:02Z","lastTransitionTime":"2026-02-19T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.581523 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.593364 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.611806 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://708efdbb32a469d559b54fa6c816d7a3ce1b7fc4bca9a81f08ce9400d6090f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"message\\\":\\\" 6182 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:43:00.162470 6182 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:43:00.162523 6182 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:43:00.162632 6182 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:43:00.163137 6182 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0219 09:43:00.163214 6182 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:43:00.163277 6182 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:43:00.163855 6182 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.628489 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ab24976-06f3-4373-825a-5234ff24f2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef52c51fd38bf34f1fc3eb014d85c40137dd15030237334159ffbb71e1d6c2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcbd8e6b02f20a249ddb3fbf20ddd72a94b40
fd420cb6ad4c59ea513994ac382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g5jnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.645562 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.645628 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.645641 4965 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.645663 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.645676 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:02Z","lastTransitionTime":"2026-02-19T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.651552 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf8
6d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: E0219 09:43:02.661878 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.666502 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.666539 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.666549 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.666566 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.666578 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:02Z","lastTransitionTime":"2026-02-19T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.669671 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: E0219 09:43:02.680970 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.682800 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.687449 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.687481 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.687495 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.687516 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.687530 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:02Z","lastTransitionTime":"2026-02-19T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.696727 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: E0219 09:43:02.698952 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.702618 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.702684 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.702698 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.702713 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.702725 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:02Z","lastTransitionTime":"2026-02-19T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.709581 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: E0219 09:43:02.714590 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.718566 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.718715 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.718780 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.718873 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.718951 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:02Z","lastTransitionTime":"2026-02-19T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.728709 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896fd4103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: E0219 09:43:02.730462 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: E0219 09:43:02.730590 4965 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.732814 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.732865 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.732880 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.732902 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.732918 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:02Z","lastTransitionTime":"2026-02-19T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.743096 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.746263 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs\") pod \"network-metrics-daemon-lwjwk\" (UID: \"1e1b431a-0390-4366-82d1-6cb782c7a9e8\") " pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:02 crc kubenswrapper[4965]: E0219 09:43:02.746378 4965 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:43:02 crc kubenswrapper[4965]: E0219 09:43:02.746422 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs podName:1e1b431a-0390-4366-82d1-6cb782c7a9e8 nodeName:}" failed. 
No retries permitted until 2026-02-19 09:43:03.746407238 +0000 UTC m=+39.367728548 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs") pod "network-metrics-daemon-lwjwk" (UID: "1e1b431a-0390-4366-82d1-6cb782c7a9e8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.757263 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resol
ver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.770963 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.781970 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.794914 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lwjwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e1b431a-0390-4366-82d1-6cb782c7a9e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lwjwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc 
kubenswrapper[4965]: I0219 09:43:02.811130 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.824948 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.835866 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.835917 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.835935 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.835954 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.835964 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:02Z","lastTransitionTime":"2026-02-19T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.838360 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.858402 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.884085 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://708efdbb32a469d559b54fa6c816d7a3ce1b7fc4bca9a81f08ce9400d6090f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"message\\\":\\\" 6182 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:43:00.162470 6182 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:43:00.162523 6182 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:43:00.162632 6182 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:43:00.163137 6182 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0219 09:43:00.163214 6182 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:43:00.163277 6182 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:43:00.163855 6182 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://708efdbb32a469d559b54fa6c816d7a3ce1b7fc4bca9a81f08ce9400d6090f0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"message\\\":\\\"09:43:01.378240 6383 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-nsjqz in node crc\\\\nI0219 09:43:01.378187 6383 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 09:43:01.378247 6383 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-nsjqz after 0 failed attempt(s)\\\\nI0219 09:43:01.378253 6383 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-nsjqz\\\\nI0219 09:43:01.378135 6383 obj_retry.go:303] Retry 
object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0219 09:43:01.378267 6383 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0219 09:43:01.378269 6383 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF0219 09:43:01.378277 6383 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.899850 4965 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ab24976-06f3-4373-825a-5234ff24f2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef52c51fd38bf34f1fc3eb014d85c40137dd15030237334159ffbb71e1d6c2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcbd8e6b02f20a249ddb3fbf20ddd72a94b40fd420cb6ad4c59ea513994ac382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g5jnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.916579 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\"
,\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.930934 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.938798 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.938871 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.938885 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.938903 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.938916 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:02Z","lastTransitionTime":"2026-02-19T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.948001 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.963863 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:02 crc kubenswrapper[4965]: I0219 09:43:02.980604 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896fd4103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:02Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.041780 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.041842 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.041859 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.041880 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.041893 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:03Z","lastTransitionTime":"2026-02-19T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.145560 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.145605 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.145619 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.145680 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.145695 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:03Z","lastTransitionTime":"2026-02-19T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.167414 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 18:23:40.5048407 +0000 UTC Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.249706 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.249778 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.249790 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.249817 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.249834 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:03Z","lastTransitionTime":"2026-02-19T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.352243 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.352327 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.352346 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.352374 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.352394 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:03Z","lastTransitionTime":"2026-02-19T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.455140 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.455184 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.455215 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.455233 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.455247 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:03Z","lastTransitionTime":"2026-02-19T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.495596 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dcfpx_7c788dfa-1923-4a2b-9619-73acf92ec849/ovnkube-controller/1.log" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.558165 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.558229 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.558249 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.558274 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.558285 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:03Z","lastTransitionTime":"2026-02-19T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.660850 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.660904 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.660915 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.660933 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.660945 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:03Z","lastTransitionTime":"2026-02-19T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.756995 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs\") pod \"network-metrics-daemon-lwjwk\" (UID: \"1e1b431a-0390-4366-82d1-6cb782c7a9e8\") " pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:03 crc kubenswrapper[4965]: E0219 09:43:03.757373 4965 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:43:03 crc kubenswrapper[4965]: E0219 09:43:03.757515 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs podName:1e1b431a-0390-4366-82d1-6cb782c7a9e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:43:05.757483834 +0000 UTC m=+41.378805174 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs") pod "network-metrics-daemon-lwjwk" (UID: "1e1b431a-0390-4366-82d1-6cb782c7a9e8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.764184 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.764276 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.764296 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.764328 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.764348 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:03Z","lastTransitionTime":"2026-02-19T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.867459 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.867541 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.867566 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.867597 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.867620 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:03Z","lastTransitionTime":"2026-02-19T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.971269 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.971325 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.971339 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.971362 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:03 crc kubenswrapper[4965]: I0219 09:43:03.971376 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:03Z","lastTransitionTime":"2026-02-19T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.074249 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.074342 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.074364 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.074393 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.074413 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:04Z","lastTransitionTime":"2026-02-19T09:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.168411 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 14:45:38.956383311 +0000 UTC Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.177769 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.177820 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.177837 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.177858 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.177872 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:04Z","lastTransitionTime":"2026-02-19T09:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.197298 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.197329 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:04 crc kubenswrapper[4965]: E0219 09:43:04.197608 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:04 crc kubenswrapper[4965]: E0219 09:43:04.197669 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.197336 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:04 crc kubenswrapper[4965]: E0219 09:43:04.197920 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.197385 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:04 crc kubenswrapper[4965]: E0219 09:43:04.198012 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.281118 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.281240 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.281260 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.281294 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.281314 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:04Z","lastTransitionTime":"2026-02-19T09:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.384575 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.384641 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.384656 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.384680 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.384696 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:04Z","lastTransitionTime":"2026-02-19T09:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.488356 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.488442 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.488463 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.488493 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.488514 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:04Z","lastTransitionTime":"2026-02-19T09:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.591118 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.591228 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.591250 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.591275 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.591292 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:04Z","lastTransitionTime":"2026-02-19T09:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.697310 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.698343 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.698402 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.698429 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.698448 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:04Z","lastTransitionTime":"2026-02-19T09:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.801407 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.801445 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.801455 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.801470 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.801480 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:04Z","lastTransitionTime":"2026-02-19T09:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.903703 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.903751 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.903762 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.903781 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:04 crc kubenswrapper[4965]: I0219 09:43:04.903795 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:04Z","lastTransitionTime":"2026-02-19T09:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.006318 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.006612 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.006713 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.006788 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.006870 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:05Z","lastTransitionTime":"2026-02-19T09:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.109903 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.109973 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.109990 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.110020 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.110036 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:05Z","lastTransitionTime":"2026-02-19T09:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.169596 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 17:21:14.053533888 +0000 UTC Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.212302 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.212409 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.212429 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.212455 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.212474 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:05Z","lastTransitionTime":"2026-02-19T09:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.215058 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lwjwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e1b431a-0390-4366-82d1-6cb782c7a9e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lwjwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:05 crc 
kubenswrapper[4965]: I0219 09:43:05.228857 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.239988 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.256499 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.269337 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.282329 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ab24976-06f3-4373-825a-5234ff24f2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef52c51fd38bf34f1fc3eb014d85c40137dd15030237334159ffbb71e1d6c2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcbd8e6b02f20a249ddb3fbf20ddd72a94b40
fd420cb6ad4c59ea513994ac382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g5jnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.299543 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.313378 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.314837 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.314908 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.314931 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.314962 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.314980 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:05Z","lastTransitionTime":"2026-02-19T09:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.326174 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.340032 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.362367 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://708efdbb32a469d559b54fa6c816d7a3ce1b7fc4bca9a81f08ce9400d6090f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"message\\\":\\\" 6182 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:43:00.162470 6182 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:43:00.162523 6182 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:43:00.162632 6182 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:43:00.163137 6182 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0219 09:43:00.163214 6182 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:43:00.163277 6182 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:43:00.163855 6182 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://708efdbb32a469d559b54fa6c816d7a3ce1b7fc4bca9a81f08ce9400d6090f0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"message\\\":\\\"09:43:01.378240 6383 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-nsjqz in node crc\\\\nI0219 09:43:01.378187 6383 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 09:43:01.378247 6383 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-nsjqz after 0 failed attempt(s)\\\\nI0219 09:43:01.378253 6383 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-nsjqz\\\\nI0219 09:43:01.378135 6383 obj_retry.go:303] Retry 
object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0219 09:43:01.378267 6383 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0219 09:43:01.378269 6383 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF0219 09:43:01.378277 6383 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.381275 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.404848 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896fd4103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.417650 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.417704 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.417722 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.417745 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.417758 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:05Z","lastTransitionTime":"2026-02-19T09:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.422049 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.439139 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.458618 4965 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.521365 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.521718 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.521782 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.521861 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.521940 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:05Z","lastTransitionTime":"2026-02-19T09:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.624879 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.624928 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.624937 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.624954 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.624965 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:05Z","lastTransitionTime":"2026-02-19T09:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.727943 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.728015 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.728027 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.728046 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.728057 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:05Z","lastTransitionTime":"2026-02-19T09:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.782558 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs\") pod \"network-metrics-daemon-lwjwk\" (UID: \"1e1b431a-0390-4366-82d1-6cb782c7a9e8\") " pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:05 crc kubenswrapper[4965]: E0219 09:43:05.782777 4965 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:43:05 crc kubenswrapper[4965]: E0219 09:43:05.783070 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs podName:1e1b431a-0390-4366-82d1-6cb782c7a9e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:43:09.783046986 +0000 UTC m=+45.404368296 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs") pod "network-metrics-daemon-lwjwk" (UID: "1e1b431a-0390-4366-82d1-6cb782c7a9e8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.831347 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.831407 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.831447 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.831468 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.831480 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:05Z","lastTransitionTime":"2026-02-19T09:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.933986 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.934040 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.934055 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.934076 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:05 crc kubenswrapper[4965]: I0219 09:43:05.934092 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:05Z","lastTransitionTime":"2026-02-19T09:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.036830 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.036874 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.036882 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.036898 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.036909 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:06Z","lastTransitionTime":"2026-02-19T09:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.140306 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.140360 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.140374 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.140400 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.140417 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:06Z","lastTransitionTime":"2026-02-19T09:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.169905 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:11:18.084761035 +0000 UTC
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.197806 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.197914 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk"
Feb 19 09:43:06 crc kubenswrapper[4965]: E0219 09:43:06.197984 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.197923 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 09:43:06 crc kubenswrapper[4965]: E0219 09:43:06.198071 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8"
Feb 19 09:43:06 crc kubenswrapper[4965]: E0219 09:43:06.198182 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.198481 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 09:43:06 crc kubenswrapper[4965]: E0219 09:43:06.198779 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.243484 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.243539 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.243553 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.243571 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.243585 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:06Z","lastTransitionTime":"2026-02-19T09:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.347012 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.347075 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.347090 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.347113 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.347128 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:06Z","lastTransitionTime":"2026-02-19T09:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.450029 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.450109 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.450134 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.450163 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.450186 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:06Z","lastTransitionTime":"2026-02-19T09:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.553139 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.553211 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.553223 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.553240 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.553251 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:06Z","lastTransitionTime":"2026-02-19T09:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.656123 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.656176 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.656215 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.656243 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.656259 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:06Z","lastTransitionTime":"2026-02-19T09:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.761655 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.761697 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.761709 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.761727 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.761739 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:06Z","lastTransitionTime":"2026-02-19T09:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.863778 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.863822 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.863831 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.863845 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.863854 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:06Z","lastTransitionTime":"2026-02-19T09:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.966814 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.967263 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.967346 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.967417 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:06 crc kubenswrapper[4965]: I0219 09:43:06.967505 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:06Z","lastTransitionTime":"2026-02-19T09:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.070566 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.070900 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.071011 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.071084 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.071150 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:07Z","lastTransitionTime":"2026-02-19T09:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.170512 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 03:46:02.432161297 +0000 UTC
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.173274 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.173373 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.173433 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.173518 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.173587 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:07Z","lastTransitionTime":"2026-02-19T09:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.277704 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.278068 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.278248 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.278400 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.278479 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:07Z","lastTransitionTime":"2026-02-19T09:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.382785 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.382833 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.382842 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.382857 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.382869 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:07Z","lastTransitionTime":"2026-02-19T09:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.486060 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.486117 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.486131 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.486154 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.486171 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:07Z","lastTransitionTime":"2026-02-19T09:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.588799 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.588853 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.588866 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.588885 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.588897 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:07Z","lastTransitionTime":"2026-02-19T09:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.691920 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.691982 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.691996 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.692020 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.692035 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:07Z","lastTransitionTime":"2026-02-19T09:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.797783 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.797872 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.797925 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.797951 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.797977 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:07Z","lastTransitionTime":"2026-02-19T09:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.901644 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.901702 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.901714 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.901733 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:07 crc kubenswrapper[4965]: I0219 09:43:07.901744 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:07Z","lastTransitionTime":"2026-02-19T09:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.004908 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.004984 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.005002 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.005028 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.005051 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:08Z","lastTransitionTime":"2026-02-19T09:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.108644 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.108709 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.108723 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.108748 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.108766 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:08Z","lastTransitionTime":"2026-02-19T09:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.172186 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:17:40.704945319 +0000 UTC
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.196808 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.196864 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.196940 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk"
Feb 19 09:43:08 crc kubenswrapper[4965]: E0219 09:43:08.196979 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.197031 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 09:43:08 crc kubenswrapper[4965]: E0219 09:43:08.197118 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 09:43:08 crc kubenswrapper[4965]: E0219 09:43:08.197350 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 09:43:08 crc kubenswrapper[4965]: E0219 09:43:08.197477 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.212281 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.212376 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.212400 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.212435 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.212458 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:08Z","lastTransitionTime":"2026-02-19T09:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.315703 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.315741 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.315752 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.315769 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.315782 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:08Z","lastTransitionTime":"2026-02-19T09:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.419164 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.419254 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.419272 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.419297 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.419314 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:08Z","lastTransitionTime":"2026-02-19T09:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.521295 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.521357 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.521371 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.521389 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.521405 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:08Z","lastTransitionTime":"2026-02-19T09:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.624020 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.624068 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.624089 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.624109 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.624121 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:08Z","lastTransitionTime":"2026-02-19T09:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.727386 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.727871 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.727961 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.728102 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.728257 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:08Z","lastTransitionTime":"2026-02-19T09:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.831387 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.831453 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.831471 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.831495 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.831514 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:08Z","lastTransitionTime":"2026-02-19T09:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.935417 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.935483 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.935501 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.935524 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:08 crc kubenswrapper[4965]: I0219 09:43:08.935543 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:08Z","lastTransitionTime":"2026-02-19T09:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.039253 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.039306 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.039324 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.039348 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.039366 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:09Z","lastTransitionTime":"2026-02-19T09:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.142295 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.142355 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.142373 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.142396 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.142414 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:09Z","lastTransitionTime":"2026-02-19T09:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.172742 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 13:22:35.148242314 +0000 UTC Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.246059 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.246130 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.246145 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.246165 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.246184 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:09Z","lastTransitionTime":"2026-02-19T09:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.349459 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.349546 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.349571 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.349605 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.349631 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:09Z","lastTransitionTime":"2026-02-19T09:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.453253 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.453346 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.453373 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.453401 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.453424 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:09Z","lastTransitionTime":"2026-02-19T09:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.556414 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.556474 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.556489 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.556511 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.556526 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:09Z","lastTransitionTime":"2026-02-19T09:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.660585 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.661115 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.661350 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.661736 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.661879 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:09Z","lastTransitionTime":"2026-02-19T09:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.765675 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.765734 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.765747 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.765770 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.765784 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:09Z","lastTransitionTime":"2026-02-19T09:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.830621 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs\") pod \"network-metrics-daemon-lwjwk\" (UID: \"1e1b431a-0390-4366-82d1-6cb782c7a9e8\") " pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:09 crc kubenswrapper[4965]: E0219 09:43:09.830878 4965 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:43:09 crc kubenswrapper[4965]: E0219 09:43:09.831039 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs podName:1e1b431a-0390-4366-82d1-6cb782c7a9e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:43:17.831005155 +0000 UTC m=+53.452326645 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs") pod "network-metrics-daemon-lwjwk" (UID: "1e1b431a-0390-4366-82d1-6cb782c7a9e8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.869222 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.869272 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.869285 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.869307 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.869321 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:09Z","lastTransitionTime":"2026-02-19T09:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.972337 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.972409 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.972424 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.972447 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:09 crc kubenswrapper[4965]: I0219 09:43:09.972461 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:09Z","lastTransitionTime":"2026-02-19T09:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.076447 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.076504 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.076513 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.076536 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.076547 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:10Z","lastTransitionTime":"2026-02-19T09:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.173177 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 20:28:25.577512003 +0000 UTC Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.180985 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.181044 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.181060 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.181090 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.181108 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:10Z","lastTransitionTime":"2026-02-19T09:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.197479 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.197543 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.197562 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:10 crc kubenswrapper[4965]: E0219 09:43:10.197639 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.197499 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:10 crc kubenswrapper[4965]: E0219 09:43:10.197822 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:10 crc kubenswrapper[4965]: E0219 09:43:10.197992 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:10 crc kubenswrapper[4965]: E0219 09:43:10.198112 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.284772 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.284826 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.284837 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.284859 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.284875 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:10Z","lastTransitionTime":"2026-02-19T09:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.388740 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.388813 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.388831 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.388854 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.388869 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:10Z","lastTransitionTime":"2026-02-19T09:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.492516 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.492584 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.492594 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.492613 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.492625 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:10Z","lastTransitionTime":"2026-02-19T09:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.596160 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.596231 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.596241 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.596258 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.596268 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:10Z","lastTransitionTime":"2026-02-19T09:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.699124 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.699576 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.699599 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.699619 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.699630 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:10Z","lastTransitionTime":"2026-02-19T09:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.801973 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.802034 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.802047 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.802073 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.802087 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:10Z","lastTransitionTime":"2026-02-19T09:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.905731 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.905798 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.905812 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.905833 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:10 crc kubenswrapper[4965]: I0219 09:43:10.905846 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:10Z","lastTransitionTime":"2026-02-19T09:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.009438 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.009479 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.009491 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.009508 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.009520 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:11Z","lastTransitionTime":"2026-02-19T09:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.112127 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.112184 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.112242 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.112268 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.112286 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:11Z","lastTransitionTime":"2026-02-19T09:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.173897 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 11:04:46.459475158 +0000 UTC
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.215702 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.215754 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.215765 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.215786 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.215798 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:11Z","lastTransitionTime":"2026-02-19T09:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.318463 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.318553 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.318580 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.318617 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.318645 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:11Z","lastTransitionTime":"2026-02-19T09:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.423789 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.423851 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.423871 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.423896 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.423917 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:11Z","lastTransitionTime":"2026-02-19T09:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.526355 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.526413 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.526425 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.526445 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.526462 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:11Z","lastTransitionTime":"2026-02-19T09:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.629428 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.629473 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.629486 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.629502 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.629514 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:11Z","lastTransitionTime":"2026-02-19T09:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.732240 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.732334 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.732356 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.732385 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.732404 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:11Z","lastTransitionTime":"2026-02-19T09:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.835545 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.835618 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.835644 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.835676 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.835705 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:11Z","lastTransitionTime":"2026-02-19T09:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.939331 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.939406 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.939420 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.939441 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:11 crc kubenswrapper[4965]: I0219 09:43:11.939455 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:11Z","lastTransitionTime":"2026-02-19T09:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.043186 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.043287 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.043308 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.043341 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.043364 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:12Z","lastTransitionTime":"2026-02-19T09:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.146422 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.146503 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.146517 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.146537 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.146550 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:12Z","lastTransitionTime":"2026-02-19T09:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.175121 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 12:35:11.673465337 +0000 UTC
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.197529 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 09:43:12 crc kubenswrapper[4965]: E0219 09:43:12.197681 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.198065 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 09:43:12 crc kubenswrapper[4965]: E0219 09:43:12.198117 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.198156 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 09:43:12 crc kubenswrapper[4965]: E0219 09:43:12.198232 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.198292 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk"
Feb 19 09:43:12 crc kubenswrapper[4965]: E0219 09:43:12.198444 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.249825 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.249891 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.249905 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.249937 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.249994 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:12Z","lastTransitionTime":"2026-02-19T09:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.353652 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.353693 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.353705 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.353722 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.353734 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:12Z","lastTransitionTime":"2026-02-19T09:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.457314 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.457388 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.457397 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.457434 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.457445 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:12Z","lastTransitionTime":"2026-02-19T09:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.560165 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.560244 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.560259 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.560281 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.560298 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:12Z","lastTransitionTime":"2026-02-19T09:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.663903 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.663963 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.663981 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.664009 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.664028 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:12Z","lastTransitionTime":"2026-02-19T09:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.767933 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.768024 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.768048 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.768084 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.768116 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:12Z","lastTransitionTime":"2026-02-19T09:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.871518 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.871578 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.871591 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.871633 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.871646 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:12Z","lastTransitionTime":"2026-02-19T09:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.974373 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.974423 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.974433 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.974451 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.974464 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:12Z","lastTransitionTime":"2026-02-19T09:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.998117 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.998160 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.998170 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.998188 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:12 crc kubenswrapper[4965]: I0219 09:43:12.998222 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:12Z","lastTransitionTime":"2026-02-19T09:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 09:43:13 crc kubenswrapper[4965]: E0219 09:43:13.014472 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:13Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.019263 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.019371 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.019386 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.019403 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.019417 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:13Z","lastTransitionTime":"2026-02-19T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:13 crc kubenswrapper[4965]: E0219 09:43:13.040469 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:13Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.045819 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.045873 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.045890 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.045914 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.045930 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:13Z","lastTransitionTime":"2026-02-19T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:13 crc kubenswrapper[4965]: E0219 09:43:13.064103 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:13Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.069370 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.069417 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.069431 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.069450 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.069463 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:13Z","lastTransitionTime":"2026-02-19T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:13 crc kubenswrapper[4965]: E0219 09:43:13.090135 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:13Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.094719 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.094766 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.094779 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.094798 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.094811 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:13Z","lastTransitionTime":"2026-02-19T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:13 crc kubenswrapper[4965]: E0219 09:43:13.112815 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:13Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:13 crc kubenswrapper[4965]: E0219 09:43:13.113093 4965 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.115532 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.115574 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.115614 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.115637 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.115653 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:13Z","lastTransitionTime":"2026-02-19T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.176054 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 01:48:00.497064967 +0000 UTC Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.219307 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.219374 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.219392 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.219417 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.219434 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:13Z","lastTransitionTime":"2026-02-19T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.323124 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.323181 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.323291 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.323318 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.323332 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:13Z","lastTransitionTime":"2026-02-19T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.426631 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.427149 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.427441 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.427705 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.427865 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:13Z","lastTransitionTime":"2026-02-19T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.532363 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.532681 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.532807 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.532899 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.532992 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:13Z","lastTransitionTime":"2026-02-19T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.637428 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.637478 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.637491 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.637508 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.637522 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:13Z","lastTransitionTime":"2026-02-19T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.740506 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.740579 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.740598 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.740625 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.740649 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:13Z","lastTransitionTime":"2026-02-19T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.843665 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.843713 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.843725 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.843745 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.843758 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:13Z","lastTransitionTime":"2026-02-19T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.947309 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.947365 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.947402 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.947427 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:13 crc kubenswrapper[4965]: I0219 09:43:13.947447 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:13Z","lastTransitionTime":"2026-02-19T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.058107 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.058234 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.058265 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.058308 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.058335 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:14Z","lastTransitionTime":"2026-02-19T09:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.161799 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.161861 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.161875 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.161901 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.161916 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:14Z","lastTransitionTime":"2026-02-19T09:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.177141 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 16:03:00.258944527 +0000 UTC Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.197821 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.197930 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.197993 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:14 crc kubenswrapper[4965]: E0219 09:43:14.198030 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.198055 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:14 crc kubenswrapper[4965]: E0219 09:43:14.198233 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:14 crc kubenswrapper[4965]: E0219 09:43:14.198330 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:14 crc kubenswrapper[4965]: E0219 09:43:14.198447 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.265447 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.265504 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.265514 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.265532 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.265542 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:14Z","lastTransitionTime":"2026-02-19T09:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.368806 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.368873 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.368886 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.368908 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.368926 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:14Z","lastTransitionTime":"2026-02-19T09:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.472164 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.472261 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.472276 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.472294 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.472310 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:14Z","lastTransitionTime":"2026-02-19T09:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.575105 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.575164 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.575180 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.575227 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.575247 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:14Z","lastTransitionTime":"2026-02-19T09:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.678497 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.678547 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.678560 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.678580 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.678592 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:14Z","lastTransitionTime":"2026-02-19T09:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.782420 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.782509 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.782530 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.782563 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.782583 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:14Z","lastTransitionTime":"2026-02-19T09:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.887010 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.887091 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.887118 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.887154 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.887228 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:14Z","lastTransitionTime":"2026-02-19T09:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.991362 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.991432 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.991460 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.991493 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:14 crc kubenswrapper[4965]: I0219 09:43:14.991517 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:14Z","lastTransitionTime":"2026-02-19T09:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.095132 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.095236 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.095256 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.095284 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.095301 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:15Z","lastTransitionTime":"2026-02-19T09:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.177988 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 14:40:18.484477465 +0000 UTC Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.199075 4965 scope.go:117] "RemoveContainer" containerID="708efdbb32a469d559b54fa6c816d7a3ce1b7fc4bca9a81f08ce9400d6090f0c" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.199346 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.199387 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.199396 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.199414 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.199428 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:15Z","lastTransitionTime":"2026-02-19T09:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.216595 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.244386 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lwjwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e1b431a-0390-4366-82d1-6cb782c7a9e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lwjwk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.262903 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.278238 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.302891 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.303539 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.303574 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.303589 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.303609 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.303624 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:15Z","lastTransitionTime":"2026-02-19T09:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.336530 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://708efdbb32a469d559b54fa6c816d7a3ce1b7fc4bca9a81f08ce9400d6090f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93e915205e75c0aa802775af4b27f25846f264025ef038251463546485cf2acf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"message\\\":\\\" 6182 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:43:00.162470 6182 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:43:00.162523 6182 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:43:00.162632 6182 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 09:43:00.163137 6182 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0219 09:43:00.163214 6182 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 09:43:00.163277 6182 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 09:43:00.163855 6182 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://708efdbb32a469d559b54fa6c816d7a3ce1b7fc4bca9a81f08ce9400d6090f0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"message\\\":\\\"09:43:01.378240 6383 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-nsjqz in node crc\\\\nI0219 09:43:01.378187 6383 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 09:43:01.378247 6383 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-nsjqz after 0 failed attempt(s)\\\\nI0219 09:43:01.378253 6383 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-nsjqz\\\\nI0219 09:43:01.378135 6383 obj_retry.go:303] Retry 
object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0219 09:43:01.378267 6383 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0219 09:43:01.378269 6383 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF0219 09:43:01.378277 6383 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.353488 4965 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ab24976-06f3-4373-825a-5234ff24f2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef52c51fd38bf34f1fc3eb014d85c40137dd15030237334159ffbb71e1d6c2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcbd8e6b02f20a249ddb3fbf20ddd72a94b40fd420cb6ad4c59ea513994ac382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g5jnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.374110 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.394556 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.407634 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.408757 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.408872 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.408904 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.408918 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:15Z","lastTransitionTime":"2026-02-19T09:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.422154 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.453877 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.470245 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.483721 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.504050 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896fd4103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.512337 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.512404 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.512421 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.512444 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.512460 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:15Z","lastTransitionTime":"2026-02-19T09:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.527209 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.541169 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.557076 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dcfpx_7c788dfa-1923-4a2b-9619-73acf92ec849/ovnkube-controller/1.log" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.560175 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerStarted","Data":"656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee"} Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.560723 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.560897 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.573684 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.590334 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.609209 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.615703 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.615798 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.615817 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.615844 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.615863 4965 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:15Z","lastTransitionTime":"2026-02-19T09:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.638306 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lwjwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e1b431a-0390-4366-82d1-6cb782c7a9e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lwjwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc 
kubenswrapper[4965]: I0219 09:43:15.665176 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.688357 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.700488 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.718111 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.718148 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.718160 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:15 crc 
kubenswrapper[4965]: I0219 09:43:15.718178 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.718212 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:15Z","lastTransitionTime":"2026-02-19T09:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.724016 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://708efdbb32a469d559b54fa6c816d7a3ce1b7fc4bca9a81f08ce9400d6090f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://708efdbb32a469d559b54fa6c816d7a3ce1b7fc4bca9a81f08ce9400d6090f0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"message\\\":\\\"09:43:01.378240 6383 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-nsjqz in node crc\\\\nI0219 09:43:01.378187 6383 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 09:43:01.378247 6383 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-nsjqz after 0 failed 
attempt(s)\\\\nI0219 09:43:01.378253 6383 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-nsjqz\\\\nI0219 09:43:01.378135 6383 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0219 09:43:01.378267 6383 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0219 09:43:01.378269 6383 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF0219 09:43:01.378277 6383 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:43:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dcfpx_openshift-ovn-kubernetes(7c788dfa-1923-4a2b-9619-73acf92ec849)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a484
2ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.743680 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ab24976-06f3-4373-825a-5234ff24f2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef52c51fd38bf34f1fc3eb014d85c40137dd15030237334159ffbb71e1d6c2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcbd8e6b02f20a249ddb3fbf20ddd72a94b40
fd420cb6ad4c59ea513994ac382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g5jnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.779539 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.805291 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.820813 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.820870 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.820894 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.820912 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.820923 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:15Z","lastTransitionTime":"2026-02-19T09:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.832903 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5
069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.848688 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.862145 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.880221 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896fd4103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.896214 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825
771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed
2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.917312 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.923480 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.923522 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.923538 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.923565 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.923578 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:15Z","lastTransitionTime":"2026-02-19T09:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.933375 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.947485 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.965416 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896fd4103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.980862 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:15 crc kubenswrapper[4965]: I0219 09:43:15.996368 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:15Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.018107 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.026143 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.026179 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.026227 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.026242 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.026254 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:16Z","lastTransitionTime":"2026-02-19T09:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.030841 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.043753 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lwjwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e1b431a-0390-4366-82d1-6cb782c7a9e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lwjwk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.060063 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.076653 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.092588 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.114270 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://708efdbb32a469d559b54fa6c816d7a3ce1b7fc4bca9a81f08ce9400d6090f0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"message\\\":\\\"09:43:01.378240 6383 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-nsjqz in node crc\\\\nI0219 09:43:01.378187 6383 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 09:43:01.378247 6383 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-nsjqz after 0 failed 
attempt(s)\\\\nI0219 09:43:01.378253 6383 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-nsjqz\\\\nI0219 09:43:01.378135 6383 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0219 09:43:01.378267 6383 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0219 09:43:01.378269 6383 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF0219 09:43:01.378277 6383 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to 
s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:43:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.128029 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ab24976-06f3-4373-825a-5234ff24f2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef52c51fd38bf34f1fc3eb014d85c40137dd15030237334159ffbb71e1d6c2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcbd8e6b02f20a249ddb3fbf20ddd72a94b40
fd420cb6ad4c59ea513994ac382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g5jnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.128954 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.129027 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.129045 4965 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.129070 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.129088 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:16Z","lastTransitionTime":"2026-02-19T09:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.144717 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf8
6d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.179044 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 02:58:18.713987493 +0000 UTC Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.197466 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.197514 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.197596 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.197606 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:16 crc kubenswrapper[4965]: E0219 09:43:16.197718 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:43:16 crc kubenswrapper[4965]: E0219 09:43:16.197846 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:16 crc kubenswrapper[4965]: E0219 09:43:16.198024 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:16 crc kubenswrapper[4965]: E0219 09:43:16.198133 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.232129 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.232173 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.232185 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.232221 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.232237 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:16Z","lastTransitionTime":"2026-02-19T09:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.335379 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.335422 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.335435 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.335453 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.335467 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:16Z","lastTransitionTime":"2026-02-19T09:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.437337 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.437379 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.437389 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.437403 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.437413 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:16Z","lastTransitionTime":"2026-02-19T09:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.540257 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.540318 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.540330 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.540350 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.540363 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:16Z","lastTransitionTime":"2026-02-19T09:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.576181 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dcfpx_7c788dfa-1923-4a2b-9619-73acf92ec849/ovnkube-controller/2.log" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.576946 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dcfpx_7c788dfa-1923-4a2b-9619-73acf92ec849/ovnkube-controller/1.log" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.579456 4965 generic.go:334] "Generic (PLEG): container finished" podID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerID="656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee" exitCode=1 Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.579501 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerDied","Data":"656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee"} Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.579556 4965 scope.go:117] "RemoveContainer" containerID="708efdbb32a469d559b54fa6c816d7a3ce1b7fc4bca9a81f08ce9400d6090f0c" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.580547 4965 scope.go:117] "RemoveContainer" containerID="656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee" Feb 19 09:43:16 crc kubenswrapper[4965]: E0219 09:43:16.580754 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dcfpx_openshift-ovn-kubernetes(7c788dfa-1923-4a2b-9619-73acf92ec849)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.595511 4965 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.610677 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.624471 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.636348 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.643087 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.643118 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.643129 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.643149 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.643160 4965 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:16Z","lastTransitionTime":"2026-02-19T09:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.649964 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lwjwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e1b431a-0390-4366-82d1-6cb782c7a9e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lwjwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc 
kubenswrapper[4965]: I0219 09:43:16.663420 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.676757 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.689237 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.701219 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.722617 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://708efdbb32a469d559b54fa6c816d7a3ce1b7fc4bca9a81f08ce9400d6090f0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"message\\\":\\\"09:43:01.378240 6383 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-nsjqz in node crc\\\\nI0219 09:43:01.378187 6383 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 09:43:01.378247 6383 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-nsjqz after 0 failed 
attempt(s)\\\\nI0219 09:43:01.378253 6383 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-nsjqz\\\\nI0219 09:43:01.378135 6383 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0219 09:43:01.378267 6383 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0219 09:43:01.378269 6383 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF0219 09:43:01.378277 6383 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:43:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:16Z\\\",\\\"message\\\":\\\"map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 09:43:16.349938 6589 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 09:43:16.349981 6589 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0219 09:43:16.350026 6589 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 09:43:16.350041 6589 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220
256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.736247 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ab24976-06f3-4373-825a-5234ff24f2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef52c51fd38bf34f1fc3eb014d85c40137dd15030237334159ffbb71e1d6c2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcbd8e6b02f20a249ddb3fbf20ddd72a94b40
fd420cb6ad4c59ea513994ac382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g5jnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.746208 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.746450 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.746522 4965 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.746614 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.746674 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:16Z","lastTransitionTime":"2026-02-19T09:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.756953 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896f
d4103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.771468 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\"
,\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.786809 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.802491 4965 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.818428 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:43:16Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.854911 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.855029 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.855056 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.855091 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.855111 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:16Z","lastTransitionTime":"2026-02-19T09:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.958349 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.958400 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.958413 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.958436 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:16 crc kubenswrapper[4965]: I0219 09:43:16.958451 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:16Z","lastTransitionTime":"2026-02-19T09:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.061852 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.061912 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.061926 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.061949 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.061964 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:17Z","lastTransitionTime":"2026-02-19T09:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.165139 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.165606 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.165716 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.165835 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.165932 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:17Z","lastTransitionTime":"2026-02-19T09:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.179545 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 21:01:26.788356882 +0000 UTC Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.268775 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.268862 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.268901 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.268938 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.268966 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:17Z","lastTransitionTime":"2026-02-19T09:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.372481 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.372534 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.372548 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.372570 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.372585 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:17Z","lastTransitionTime":"2026-02-19T09:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.476533 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.476590 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.476604 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.476624 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.476639 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:17Z","lastTransitionTime":"2026-02-19T09:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.579548 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.579616 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.579636 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.579661 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.579683 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:17Z","lastTransitionTime":"2026-02-19T09:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.584895 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dcfpx_7c788dfa-1923-4a2b-9619-73acf92ec849/ovnkube-controller/2.log" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.589465 4965 scope.go:117] "RemoveContainer" containerID="656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee" Feb 19 09:43:17 crc kubenswrapper[4965]: E0219 09:43:17.589789 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dcfpx_openshift-ovn-kubernetes(7c788dfa-1923-4a2b-9619-73acf92ec849)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.612503 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\"
,\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.635824 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.653167 4965 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.670341 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:43:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.683439 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.683554 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.683614 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.683685 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.683758 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:17Z","lastTransitionTime":"2026-02-19T09:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.695793 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896fd4103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.717743 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.736813 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.758114 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.774974 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.787499 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.787611 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.787632 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.787698 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.787732 4965 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:17Z","lastTransitionTime":"2026-02-19T09:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.791268 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lwjwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e1b431a-0390-4366-82d1-6cb782c7a9e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lwjwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:17 crc 
kubenswrapper[4965]: I0219 09:43:17.810737 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.827652 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.844259 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.863546 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:16Z\\\",\\\"message\\\":\\\"map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 09:43:16.349938 6589 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 09:43:16.349981 6589 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0219 09:43:16.350026 6589 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 09:43:16.350041 6589 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:43:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dcfpx_openshift-ovn-kubernetes(7c788dfa-1923-4a2b-9619-73acf92ec849)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a484
2ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.877741 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ab24976-06f3-4373-825a-5234ff24f2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef52c51fd38bf34f1fc3eb014d85c40137dd15030237334159ffbb71e1d6c2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcbd8e6b02f20a249ddb3fbf20ddd72a94b40
fd420cb6ad4c59ea513994ac382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g5jnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.890747 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.890815 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.890830 4965 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.890856 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.890875 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:17Z","lastTransitionTime":"2026-02-19T09:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.894536 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf8
6d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:17Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.928248 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.928454 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs\") pod \"network-metrics-daemon-lwjwk\" (UID: \"1e1b431a-0390-4366-82d1-6cb782c7a9e8\") " pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:17 crc kubenswrapper[4965]: E0219 09:43:17.928580 4965 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:43:49.928545219 +0000 UTC m=+85.549866539 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:43:17 crc kubenswrapper[4965]: E0219 09:43:17.928608 4965 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:43:17 crc kubenswrapper[4965]: E0219 09:43:17.928675 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs podName:1e1b431a-0390-4366-82d1-6cb782c7a9e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:43:33.928658613 +0000 UTC m=+69.549979923 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs") pod "network-metrics-daemon-lwjwk" (UID: "1e1b431a-0390-4366-82d1-6cb782c7a9e8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.994446 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.994510 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.994528 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.994551 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:17 crc kubenswrapper[4965]: I0219 09:43:17.994568 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:17Z","lastTransitionTime":"2026-02-19T09:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.029424 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.029557 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.029630 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:18 crc kubenswrapper[4965]: E0219 09:43:18.029762 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:43:18 crc kubenswrapper[4965]: E0219 09:43:18.029827 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:43:18 crc kubenswrapper[4965]: E0219 09:43:18.029844 4965 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:43:18 crc kubenswrapper[4965]: E0219 09:43:18.029880 4965 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:43:18 crc kubenswrapper[4965]: E0219 09:43:18.029924 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 09:43:50.029901035 +0000 UTC m=+85.651222355 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:43:18 crc kubenswrapper[4965]: E0219 09:43:18.030023 4965 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.030209 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:18 crc kubenswrapper[4965]: E0219 09:43:18.030314 4965 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:43:50.030277514 +0000 UTC m=+85.651598864 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:43:18 crc kubenswrapper[4965]: E0219 09:43:18.030409 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:43:18 crc kubenswrapper[4965]: E0219 09:43:18.030434 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:43:18 crc kubenswrapper[4965]: E0219 09:43:18.030454 4965 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:43:18 crc kubenswrapper[4965]: E0219 09:43:18.030518 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 09:43:50.03049953 +0000 UTC m=+85.651820850 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:43:18 crc kubenswrapper[4965]: E0219 09:43:18.030561 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:43:50.030552001 +0000 UTC m=+85.651873321 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.097871 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.097939 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.097954 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.097977 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.097996 4965 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:18Z","lastTransitionTime":"2026-02-19T09:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.179940 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 18:03:02.858950153 +0000 UTC Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.197185 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.197284 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.197233 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.197233 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:18 crc kubenswrapper[4965]: E0219 09:43:18.197372 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:43:18 crc kubenswrapper[4965]: E0219 09:43:18.197558 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:18 crc kubenswrapper[4965]: E0219 09:43:18.197596 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:18 crc kubenswrapper[4965]: E0219 09:43:18.197667 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.201446 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.201480 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.201492 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.201512 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.201527 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:18Z","lastTransitionTime":"2026-02-19T09:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.305578 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.305728 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.305740 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.305761 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.305779 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:18Z","lastTransitionTime":"2026-02-19T09:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.409044 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.409116 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.409130 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.409151 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.409165 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:18Z","lastTransitionTime":"2026-02-19T09:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.512600 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.512655 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.512666 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.512686 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.512698 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:18Z","lastTransitionTime":"2026-02-19T09:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.642516 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.642596 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.642630 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.642651 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.642663 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:18Z","lastTransitionTime":"2026-02-19T09:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.706164 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.716934 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.721874 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node
-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.736426 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.745358 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:18 crc 
kubenswrapper[4965]: I0219 09:43:18.745453 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.745481 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.745516 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.745545 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:18Z","lastTransitionTime":"2026-02-19T09:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.756596 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.774437 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lwjwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e1b431a-0390-4366-82d1-6cb782c7a9e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lwjwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:18 crc 
kubenswrapper[4965]: I0219 09:43:18.795743 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.812273 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.825957 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.847860 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.847931 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.847954 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:18 crc 
kubenswrapper[4965]: I0219 09:43:18.847985 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.848004 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:18Z","lastTransitionTime":"2026-02-19T09:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.859103 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:16Z\\\",\\\"message\\\":\\\"map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 09:43:16.349938 6589 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 09:43:16.349981 6589 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0219 09:43:16.350026 6589 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 09:43:16.350041 6589 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:43:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dcfpx_openshift-ovn-kubernetes(7c788dfa-1923-4a2b-9619-73acf92ec849)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a484
2ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.875672 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ab24976-06f3-4373-825a-5234ff24f2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef52c51fd38bf34f1fc3eb014d85c40137dd15030237334159ffbb71e1d6c2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcbd8e6b02f20a249ddb3fbf20ddd72a94b40
fd420cb6ad4c59ea513994ac382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g5jnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.896827 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.917747 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.932075 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\"
,\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.951101 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.951223 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.951245 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.951276 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.951297 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:18Z","lastTransitionTime":"2026-02-19T09:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.951835 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5
069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.968792 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:18 crc kubenswrapper[4965]: I0219 09:43:18.988030 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:43:18Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.010298 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896fd4103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:19Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.053710 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.053774 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.053793 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.053819 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.053838 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:19Z","lastTransitionTime":"2026-02-19T09:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.157170 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.157305 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.157327 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.157357 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.157376 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:19Z","lastTransitionTime":"2026-02-19T09:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.180807 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 01:08:53.560896944 +0000 UTC Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.260735 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.260793 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.260808 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.260826 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.260839 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:19Z","lastTransitionTime":"2026-02-19T09:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.363376 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.363419 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.363428 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.363442 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.363452 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:19Z","lastTransitionTime":"2026-02-19T09:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.466977 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.467037 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.467050 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.467070 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.467083 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:19Z","lastTransitionTime":"2026-02-19T09:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.569460 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.569505 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.569515 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.569531 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.569544 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:19Z","lastTransitionTime":"2026-02-19T09:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.672466 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.672601 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.672636 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.672672 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.672697 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:19Z","lastTransitionTime":"2026-02-19T09:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.776099 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.776162 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.776173 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.776209 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.776221 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:19Z","lastTransitionTime":"2026-02-19T09:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.878763 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.878843 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.878857 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.878878 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.878891 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:19Z","lastTransitionTime":"2026-02-19T09:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.981796 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.981851 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.981863 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.981877 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:19 crc kubenswrapper[4965]: I0219 09:43:19.981888 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:19Z","lastTransitionTime":"2026-02-19T09:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.085078 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.085126 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.085141 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.085161 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.085175 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:20Z","lastTransitionTime":"2026-02-19T09:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.181161 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 15:36:55.691755074 +0000 UTC Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.188978 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.189035 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.189056 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.189083 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.189102 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:20Z","lastTransitionTime":"2026-02-19T09:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.197252 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.197284 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.197299 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:20 crc kubenswrapper[4965]: E0219 09:43:20.197364 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.197436 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:20 crc kubenswrapper[4965]: E0219 09:43:20.197464 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:20 crc kubenswrapper[4965]: E0219 09:43:20.197685 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8"
Feb 19 09:43:20 crc kubenswrapper[4965]: E0219 09:43:20.197759 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.292244 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.292563 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.292620 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.292655 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.292677 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:20Z","lastTransitionTime":"2026-02-19T09:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.395389 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.395445 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.395454 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.395469 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.395479 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:20Z","lastTransitionTime":"2026-02-19T09:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.498266 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.498317 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.498326 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.498348 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.498358 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:20Z","lastTransitionTime":"2026-02-19T09:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.600516 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.600564 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.600574 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.600592 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.600605 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:20Z","lastTransitionTime":"2026-02-19T09:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.704100 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.704153 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.704162 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.704180 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.704207 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:20Z","lastTransitionTime":"2026-02-19T09:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.807417 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.807487 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.807503 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.807528 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.807546 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:20Z","lastTransitionTime":"2026-02-19T09:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.910158 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.910239 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.910252 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.910277 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:20 crc kubenswrapper[4965]: I0219 09:43:20.910292 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:20Z","lastTransitionTime":"2026-02-19T09:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.013065 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.013127 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.013139 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.013160 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.013173 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:21Z","lastTransitionTime":"2026-02-19T09:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.115880 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.115928 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.115943 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.115961 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.115972 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:21Z","lastTransitionTime":"2026-02-19T09:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.181412 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 22:56:58.190338574 +0000 UTC
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.218505 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.218573 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.218585 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.218603 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.218613 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:21Z","lastTransitionTime":"2026-02-19T09:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.321827 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.321888 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.321902 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.321921 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.321940 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:21Z","lastTransitionTime":"2026-02-19T09:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.424112 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.424146 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.424157 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.424174 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.424183 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:21Z","lastTransitionTime":"2026-02-19T09:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.526431 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.526469 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.526478 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.526495 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.526507 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:21Z","lastTransitionTime":"2026-02-19T09:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.629513 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.629624 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.629660 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.629694 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.629715 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:21Z","lastTransitionTime":"2026-02-19T09:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.733500 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.733575 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.733588 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.733609 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.733622 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:21Z","lastTransitionTime":"2026-02-19T09:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.836544 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.836602 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.836613 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.836631 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.836641 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:21Z","lastTransitionTime":"2026-02-19T09:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.940100 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.940169 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.940182 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.940218 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:21 crc kubenswrapper[4965]: I0219 09:43:21.940231 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:21Z","lastTransitionTime":"2026-02-19T09:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.043530 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.043593 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.043610 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.043632 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.043648 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:22Z","lastTransitionTime":"2026-02-19T09:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.147607 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.147706 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.147726 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.147838 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.147866 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:22Z","lastTransitionTime":"2026-02-19T09:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.182371 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 23:57:48.881534289 +0000 UTC
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.197764 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.197840 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk"
Feb 19 09:43:22 crc kubenswrapper[4965]: E0219 09:43:22.197923 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 09:43:22 crc kubenswrapper[4965]: E0219 09:43:22.198027 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.198122 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 09:43:22 crc kubenswrapper[4965]: E0219 09:43:22.198312 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.198414 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 09:43:22 crc kubenswrapper[4965]: E0219 09:43:22.198530 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.251537 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.251762 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.251785 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.251812 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.251837 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:22Z","lastTransitionTime":"2026-02-19T09:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.355545 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.355650 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.355669 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.355695 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.355712 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:22Z","lastTransitionTime":"2026-02-19T09:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.458611 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.458678 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.458690 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.458709 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.458722 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:22Z","lastTransitionTime":"2026-02-19T09:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.561777 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.561828 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.561842 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.561859 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.561870 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:22Z","lastTransitionTime":"2026-02-19T09:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.664420 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.664470 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.664479 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.664496 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.664508 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:22Z","lastTransitionTime":"2026-02-19T09:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.767618 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.767673 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.767685 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.767704 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.767723 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:22Z","lastTransitionTime":"2026-02-19T09:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.871803 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.871848 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.871858 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.871874 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.871885 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:22Z","lastTransitionTime":"2026-02-19T09:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.974910 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.974986 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.975005 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.975033 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:22 crc kubenswrapper[4965]: I0219 09:43:22.975053 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:22Z","lastTransitionTime":"2026-02-19T09:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.079339 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.079411 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.079432 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.079457 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.079478 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:23Z","lastTransitionTime":"2026-02-19T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.182959 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 06:04:49.711526825 +0000 UTC
Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.183840 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.183926 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.183949 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.183983 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.184007 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:23Z","lastTransitionTime":"2026-02-19T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.288880 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.288964 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.288984 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.289013 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.289034 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:23Z","lastTransitionTime":"2026-02-19T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.326886 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.326974 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.326995 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.327026 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.327047 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:23Z","lastTransitionTime":"2026-02-19T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:23 crc kubenswrapper[4965]: E0219 09:43:23.350062 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.356608 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.356682 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.356702 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.356739 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.356757 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:23Z","lastTransitionTime":"2026-02-19T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:23 crc kubenswrapper[4965]: E0219 09:43:23.379181 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.385031 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.385120 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.385135 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.385156 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.385169 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:23Z","lastTransitionTime":"2026-02-19T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:23 crc kubenswrapper[4965]: E0219 09:43:23.404326 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.410248 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.410333 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.410353 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.410386 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.410408 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:23Z","lastTransitionTime":"2026-02-19T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:23 crc kubenswrapper[4965]: E0219 09:43:23.433210 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.437507 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.437547 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.437573 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.437597 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.437614 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:23Z","lastTransitionTime":"2026-02-19T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:23 crc kubenswrapper[4965]: E0219 09:43:23.451744 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:23Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:23 crc kubenswrapper[4965]: E0219 09:43:23.451978 4965 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.455311 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.455370 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.455388 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.455416 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.455435 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:23Z","lastTransitionTime":"2026-02-19T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.558969 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.559034 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.559061 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.559086 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.559113 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:23Z","lastTransitionTime":"2026-02-19T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.663714 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.663776 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.663794 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.663818 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.663836 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:23Z","lastTransitionTime":"2026-02-19T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.767876 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.767936 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.767952 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.767976 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.767995 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:23Z","lastTransitionTime":"2026-02-19T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.871336 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.871403 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.871420 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.871447 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.871467 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:23Z","lastTransitionTime":"2026-02-19T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.975493 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.975545 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.975558 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.975575 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:23 crc kubenswrapper[4965]: I0219 09:43:23.975584 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:23Z","lastTransitionTime":"2026-02-19T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.079418 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.079494 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.079511 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.079539 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.079559 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:24Z","lastTransitionTime":"2026-02-19T09:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.182894 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.182947 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.182965 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.182993 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.183011 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:24Z","lastTransitionTime":"2026-02-19T09:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.183960 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 12:16:33.230881636 +0000 UTC Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.197468 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.197508 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.197541 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.197642 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:24 crc kubenswrapper[4965]: E0219 09:43:24.197795 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:24 crc kubenswrapper[4965]: E0219 09:43:24.197904 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:24 crc kubenswrapper[4965]: E0219 09:43:24.197996 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:43:24 crc kubenswrapper[4965]: E0219 09:43:24.198066 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.287330 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.287387 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.287399 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.287420 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.287433 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:24Z","lastTransitionTime":"2026-02-19T09:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.391570 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.391675 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.391698 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.391731 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.391754 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:24Z","lastTransitionTime":"2026-02-19T09:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.494695 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.494793 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.494819 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.494856 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.494881 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:24Z","lastTransitionTime":"2026-02-19T09:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.598878 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.598949 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.598962 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.598984 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.598997 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:24Z","lastTransitionTime":"2026-02-19T09:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.702166 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.702270 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.702290 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.702339 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.702358 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:24Z","lastTransitionTime":"2026-02-19T09:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.806415 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.806466 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.806480 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.806500 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.806513 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:24Z","lastTransitionTime":"2026-02-19T09:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.909388 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.909464 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.909479 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.909498 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:24 crc kubenswrapper[4965]: I0219 09:43:24.909509 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:24Z","lastTransitionTime":"2026-02-19T09:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.012190 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.012272 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.012285 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.012303 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.012315 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:25Z","lastTransitionTime":"2026-02-19T09:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.115440 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.115475 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.115484 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.115499 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.115509 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:25Z","lastTransitionTime":"2026-02-19T09:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.185242 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 08:53:29.648726104 +0000 UTC Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.221383 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.221442 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.221461 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.221487 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.221504 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:25Z","lastTransitionTime":"2026-02-19T09:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.223702 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f12bbde7-ee02-4143-b0a7-af0299919dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5177a63dec267486f4128ae0156f4cb79507b735fca5964a100bc27890e5d13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a087adce637c236bd6e6e1ee13c37
42493b9f09053cd984a7c4334056f06d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://596eb135489ffa0def98b9d17adf293522beb945db0674088c8cb37d1e83b7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://883c5ece4dc1535a8e1ed6490e8eb103b52b27777bb8dd2244aa3ccbcac483d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://883c5ece4dc1535a8e1ed6490e8eb103b52b27777bb8dd2244aa3ccbcac483d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.241930 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.259660 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.280807 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.296618 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.316985 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lwjwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e1b431a-0390-4366-82d1-6cb782c7a9e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lwjwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:25 crc 
kubenswrapper[4965]: I0219 09:43:25.324080 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.324169 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.324185 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.324615 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.324870 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:25Z","lastTransitionTime":"2026-02-19T09:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.338171 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.356418 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.373506 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.389643 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.418063 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:16Z\\\",\\\"message\\\":\\\"map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 09:43:16.349938 6589 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 09:43:16.349981 6589 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0219 09:43:16.350026 6589 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 09:43:16.350041 6589 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:43:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dcfpx_openshift-ovn-kubernetes(7c788dfa-1923-4a2b-9619-73acf92ec849)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a484
2ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.429067 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.429115 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.429133 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.429160 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.429180 4965 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:25Z","lastTransitionTime":"2026-02-19T09:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.435675 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ab24976-06f3-4373-825a-5234ff24f2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef52c51fd38bf34f1fc3eb014d85c40137dd15030237334159ffbb71e1d6c2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcbd8e6b02f20a249ddb3fbf20ddd72a94b40fd420cb6ad4c59ea513994ac382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g5jnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.457800 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896fd4103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:59Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e5431
9f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.476656 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25971
26bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\
\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating 
authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.498371 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.517866 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.532106 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.532326 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.532354 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:25 crc 
kubenswrapper[4965]: I0219 09:43:25.532384 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.532405 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:25Z","lastTransitionTime":"2026-02-19T09:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.536365 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:25Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.635368 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.635434 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.635637 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.635661 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.635679 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:25Z","lastTransitionTime":"2026-02-19T09:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.739157 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.739286 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.739323 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.739349 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.739365 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:25Z","lastTransitionTime":"2026-02-19T09:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.843173 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.843271 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.843291 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.843310 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.843324 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:25Z","lastTransitionTime":"2026-02-19T09:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.946514 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.946569 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.946584 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.946602 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:25 crc kubenswrapper[4965]: I0219 09:43:25.946616 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:25Z","lastTransitionTime":"2026-02-19T09:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.050456 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.050531 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.050557 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.050589 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.050618 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:26Z","lastTransitionTime":"2026-02-19T09:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.153181 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.153280 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.153294 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.153310 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.153322 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:26Z","lastTransitionTime":"2026-02-19T09:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.186328 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:21:50.353900938 +0000 UTC Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.197890 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.197980 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.198137 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:26 crc kubenswrapper[4965]: E0219 09:43:26.198415 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.198504 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:26 crc kubenswrapper[4965]: E0219 09:43:26.198738 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:26 crc kubenswrapper[4965]: E0219 09:43:26.198844 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:26 crc kubenswrapper[4965]: E0219 09:43:26.199021 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.264817 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.264878 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.264899 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.264924 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.264942 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:26Z","lastTransitionTime":"2026-02-19T09:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.368191 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.368289 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.368307 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.368331 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.368346 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:26Z","lastTransitionTime":"2026-02-19T09:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.472947 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.473021 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.473041 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.473068 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.473086 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:26Z","lastTransitionTime":"2026-02-19T09:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.576243 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.576289 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.576300 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.576318 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.576329 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:26Z","lastTransitionTime":"2026-02-19T09:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.679672 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.679734 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.679752 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.679777 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.679798 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:26Z","lastTransitionTime":"2026-02-19T09:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.783646 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.783708 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.783726 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.783749 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.783767 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:26Z","lastTransitionTime":"2026-02-19T09:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.887807 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.887903 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.887930 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.887962 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.887986 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:26Z","lastTransitionTime":"2026-02-19T09:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.991242 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.991343 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.991357 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.991393 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:26 crc kubenswrapper[4965]: I0219 09:43:26.991410 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:26Z","lastTransitionTime":"2026-02-19T09:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.095009 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.095081 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.095095 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.095118 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.095135 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:27Z","lastTransitionTime":"2026-02-19T09:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.187062 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 09:10:00.078483811 +0000 UTC Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.198727 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.198769 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.198780 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.198802 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.198817 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:27Z","lastTransitionTime":"2026-02-19T09:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.302726 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.302778 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.302793 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.302813 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.302829 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:27Z","lastTransitionTime":"2026-02-19T09:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.406332 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.406380 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.406391 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.406410 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.406419 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:27Z","lastTransitionTime":"2026-02-19T09:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.509245 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.509316 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.509328 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.509349 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.509368 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:27Z","lastTransitionTime":"2026-02-19T09:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.612792 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.612841 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.612855 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.612877 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.612891 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:27Z","lastTransitionTime":"2026-02-19T09:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.717276 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.717357 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.717374 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.717401 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.717419 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:27Z","lastTransitionTime":"2026-02-19T09:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.821545 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.821616 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.821634 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.821659 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.821718 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:27Z","lastTransitionTime":"2026-02-19T09:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.924954 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.925042 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.925055 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.925078 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:27 crc kubenswrapper[4965]: I0219 09:43:27.925093 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:27Z","lastTransitionTime":"2026-02-19T09:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.028428 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.028507 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.028524 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.028551 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.028573 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:28Z","lastTransitionTime":"2026-02-19T09:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.131798 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.131875 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.131893 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.131919 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.131940 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:28Z","lastTransitionTime":"2026-02-19T09:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.188016 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 18:07:06.399782382 +0000 UTC Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.197409 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.197630 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:28 crc kubenswrapper[4965]: E0219 09:43:28.197637 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.197708 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.197770 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:28 crc kubenswrapper[4965]: E0219 09:43:28.198459 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:28 crc kubenswrapper[4965]: E0219 09:43:28.198646 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:28 crc kubenswrapper[4965]: E0219 09:43:28.198817 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.201513 4965 scope.go:117] "RemoveContainer" containerID="656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee" Feb 19 09:43:28 crc kubenswrapper[4965]: E0219 09:43:28.203440 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dcfpx_openshift-ovn-kubernetes(7c788dfa-1923-4a2b-9619-73acf92ec849)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.235731 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.235788 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.235802 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.235826 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.235841 4965 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:28Z","lastTransitionTime":"2026-02-19T09:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.338962 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.339046 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.339064 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.339091 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.339111 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:28Z","lastTransitionTime":"2026-02-19T09:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.442632 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.442684 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.442697 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.442714 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.442726 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:28Z","lastTransitionTime":"2026-02-19T09:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.546261 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.546307 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.546319 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.546337 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.546376 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:28Z","lastTransitionTime":"2026-02-19T09:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.648586 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.648638 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.648651 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.648672 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.648686 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:28Z","lastTransitionTime":"2026-02-19T09:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.755482 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.755542 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.755554 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.755576 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.755592 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:28Z","lastTransitionTime":"2026-02-19T09:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.859142 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.859213 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.859227 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.859250 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.859265 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:28Z","lastTransitionTime":"2026-02-19T09:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.962221 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.962270 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.962283 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.962302 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:28 crc kubenswrapper[4965]: I0219 09:43:28.962315 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:28Z","lastTransitionTime":"2026-02-19T09:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.065688 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.065729 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.065745 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.065767 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.065781 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:29Z","lastTransitionTime":"2026-02-19T09:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.168063 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.168112 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.168124 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.168141 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.168153 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:29Z","lastTransitionTime":"2026-02-19T09:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.188907 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 10:09:20.135302681 +0000 UTC Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.271137 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.271182 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.271217 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.271240 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.271258 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:29Z","lastTransitionTime":"2026-02-19T09:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.373586 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.373630 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.373639 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.373655 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.373665 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:29Z","lastTransitionTime":"2026-02-19T09:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.476371 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.476443 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.476461 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.476490 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.476509 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:29Z","lastTransitionTime":"2026-02-19T09:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.579687 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.579756 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.579771 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.579793 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.579807 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:29Z","lastTransitionTime":"2026-02-19T09:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.682628 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.683114 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.683125 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.683142 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.683151 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:29Z","lastTransitionTime":"2026-02-19T09:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.788010 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.788068 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.788084 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.788106 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.788120 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:29Z","lastTransitionTime":"2026-02-19T09:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.891513 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.891561 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.891575 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.891593 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.891606 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:29Z","lastTransitionTime":"2026-02-19T09:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.995012 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.995105 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.995132 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.995165 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:29 crc kubenswrapper[4965]: I0219 09:43:29.995188 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:29Z","lastTransitionTime":"2026-02-19T09:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.098733 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.098791 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.098810 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.098838 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.098855 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:30Z","lastTransitionTime":"2026-02-19T09:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.189505 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 11:15:00.613470053 +0000 UTC Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.197050 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.197119 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.197157 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.197068 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:30 crc kubenswrapper[4965]: E0219 09:43:30.197284 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:30 crc kubenswrapper[4965]: E0219 09:43:30.197420 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:30 crc kubenswrapper[4965]: E0219 09:43:30.197523 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:43:30 crc kubenswrapper[4965]: E0219 09:43:30.197610 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.203259 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.203319 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.203337 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.203361 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.203380 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:30Z","lastTransitionTime":"2026-02-19T09:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.306372 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.306434 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.306449 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.306472 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.306492 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:30Z","lastTransitionTime":"2026-02-19T09:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.409521 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.409557 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.409565 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.409581 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.409593 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:30Z","lastTransitionTime":"2026-02-19T09:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.511701 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.511780 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.511796 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.511818 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.511831 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:30Z","lastTransitionTime":"2026-02-19T09:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.614596 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.614667 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.614685 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.614713 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.614732 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:30Z","lastTransitionTime":"2026-02-19T09:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.717617 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.717666 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.717679 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.717697 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.717711 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:30Z","lastTransitionTime":"2026-02-19T09:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.820839 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.820891 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.820900 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.820915 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.820925 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:30Z","lastTransitionTime":"2026-02-19T09:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.936517 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.936572 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.936582 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.936600 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:30 crc kubenswrapper[4965]: I0219 09:43:30.936617 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:30Z","lastTransitionTime":"2026-02-19T09:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.041686 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.041746 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.041766 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.041786 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.041802 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:31Z","lastTransitionTime":"2026-02-19T09:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.145559 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.145623 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.145639 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.145664 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.145681 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:31Z","lastTransitionTime":"2026-02-19T09:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.189863 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 00:36:29.263602419 +0000 UTC Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.250490 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.250554 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.250567 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.250591 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.250607 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:31Z","lastTransitionTime":"2026-02-19T09:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.353845 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.353906 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.353923 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.353950 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.353968 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:31Z","lastTransitionTime":"2026-02-19T09:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.457684 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.457910 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.457937 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.457967 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.457986 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:31Z","lastTransitionTime":"2026-02-19T09:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.560432 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.560490 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.560499 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.560516 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.560525 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:31Z","lastTransitionTime":"2026-02-19T09:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.663353 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.663421 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.663444 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.663473 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.663496 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:31Z","lastTransitionTime":"2026-02-19T09:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.766250 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.766307 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.766322 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.766341 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.766353 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:31Z","lastTransitionTime":"2026-02-19T09:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.870607 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.870681 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.870705 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.870736 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.870759 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:31Z","lastTransitionTime":"2026-02-19T09:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.974706 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.974771 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.974790 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.974817 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:31 crc kubenswrapper[4965]: I0219 09:43:31.974835 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:31Z","lastTransitionTime":"2026-02-19T09:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.078041 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.078083 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.078092 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.078110 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.078120 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:32Z","lastTransitionTime":"2026-02-19T09:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.181467 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.181520 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.181532 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.181553 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.181563 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:32Z","lastTransitionTime":"2026-02-19T09:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.190844 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 07:33:27.946037262 +0000 UTC Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.197259 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.197292 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.197363 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.197257 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:32 crc kubenswrapper[4965]: E0219 09:43:32.197404 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:32 crc kubenswrapper[4965]: E0219 09:43:32.197596 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:32 crc kubenswrapper[4965]: E0219 09:43:32.197630 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:43:32 crc kubenswrapper[4965]: E0219 09:43:32.197697 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.283987 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.284093 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.284116 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.284139 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.284157 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:32Z","lastTransitionTime":"2026-02-19T09:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.386907 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.386943 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.386952 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.386966 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.386976 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:32Z","lastTransitionTime":"2026-02-19T09:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.489642 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.489690 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.489708 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.489730 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.489745 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:32Z","lastTransitionTime":"2026-02-19T09:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.592574 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.592658 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.592716 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.592747 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.592766 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:32Z","lastTransitionTime":"2026-02-19T09:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.695553 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.695613 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.695631 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.695651 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.695669 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:32Z","lastTransitionTime":"2026-02-19T09:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.799132 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.799238 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.799259 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.799286 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.799307 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:32Z","lastTransitionTime":"2026-02-19T09:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.902423 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.902490 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.902500 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.902525 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:32 crc kubenswrapper[4965]: I0219 09:43:32.902541 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:32Z","lastTransitionTime":"2026-02-19T09:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.005121 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.005184 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.005220 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.005240 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.005254 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:33Z","lastTransitionTime":"2026-02-19T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.108373 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.108434 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.108445 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.108464 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.108476 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:33Z","lastTransitionTime":"2026-02-19T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.191236 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:46:23.560506482 +0000 UTC Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.210872 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.210912 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.210930 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.210949 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.210960 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:33Z","lastTransitionTime":"2026-02-19T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.314832 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.314881 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.314892 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.314911 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.314923 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:33Z","lastTransitionTime":"2026-02-19T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.418831 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.418869 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.418883 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.418901 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.418915 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:33Z","lastTransitionTime":"2026-02-19T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.522392 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.522470 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.522494 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.522521 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.522540 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:33Z","lastTransitionTime":"2026-02-19T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.625997 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.626046 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.626058 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.626074 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.626086 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:33Z","lastTransitionTime":"2026-02-19T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.662821 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.662847 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.662857 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.662872 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.662881 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:33Z","lastTransitionTime":"2026-02-19T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:33 crc kubenswrapper[4965]: E0219 09:43:33.681342 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:33Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.686011 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.686057 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.686070 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.686088 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.686101 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:33Z","lastTransitionTime":"2026-02-19T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:33 crc kubenswrapper[4965]: E0219 09:43:33.703766 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:33Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.714634 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.714690 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.714725 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.714745 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.714761 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:33Z","lastTransitionTime":"2026-02-19T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:33 crc kubenswrapper[4965]: E0219 09:43:33.733946 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:33Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.738462 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.738524 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.738548 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.738580 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.738616 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:33Z","lastTransitionTime":"2026-02-19T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:33 crc kubenswrapper[4965]: E0219 09:43:33.756799 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:33Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.761050 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.761100 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.761115 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.761134 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.761146 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:33Z","lastTransitionTime":"2026-02-19T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:33 crc kubenswrapper[4965]: E0219 09:43:33.773936 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:33Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:33 crc kubenswrapper[4965]: E0219 09:43:33.774045 4965 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.775802 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.775839 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.775852 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.775868 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.775880 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:33Z","lastTransitionTime":"2026-02-19T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.878831 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.878874 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.878886 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.878903 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.878916 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:33Z","lastTransitionTime":"2026-02-19T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.981833 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.981881 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.981894 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.981912 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:33 crc kubenswrapper[4965]: I0219 09:43:33.981949 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:33Z","lastTransitionTime":"2026-02-19T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.024308 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs\") pod \"network-metrics-daemon-lwjwk\" (UID: \"1e1b431a-0390-4366-82d1-6cb782c7a9e8\") " pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:34 crc kubenswrapper[4965]: E0219 09:43:34.024509 4965 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:43:34 crc kubenswrapper[4965]: E0219 09:43:34.024593 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs podName:1e1b431a-0390-4366-82d1-6cb782c7a9e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:44:06.02456908 +0000 UTC m=+101.645890390 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs") pod "network-metrics-daemon-lwjwk" (UID: "1e1b431a-0390-4366-82d1-6cb782c7a9e8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.085500 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.085550 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.085563 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.085580 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.085590 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:34Z","lastTransitionTime":"2026-02-19T09:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.188288 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.188331 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.188340 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.188356 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.188366 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:34Z","lastTransitionTime":"2026-02-19T09:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.192410 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:42:02.444745948 +0000 UTC Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.197733 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.197770 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.197775 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.197810 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:34 crc kubenswrapper[4965]: E0219 09:43:34.197860 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:34 crc kubenswrapper[4965]: E0219 09:43:34.197944 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:34 crc kubenswrapper[4965]: E0219 09:43:34.198022 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:43:34 crc kubenswrapper[4965]: E0219 09:43:34.198083 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.291949 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.292010 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.292022 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.292043 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.292057 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:34Z","lastTransitionTime":"2026-02-19T09:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.395375 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.395442 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.395454 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.395499 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.395512 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:34Z","lastTransitionTime":"2026-02-19T09:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.499654 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.499715 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.499733 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.499760 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.499777 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:34Z","lastTransitionTime":"2026-02-19T09:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.602846 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.603300 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.603414 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.603524 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.603631 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:34Z","lastTransitionTime":"2026-02-19T09:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.706259 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.706304 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.706314 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.706330 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.706340 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:34Z","lastTransitionTime":"2026-02-19T09:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.809912 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.809966 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.809979 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.809996 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.810012 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:34Z","lastTransitionTime":"2026-02-19T09:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.913243 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.913303 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.913316 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.913331 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:34 crc kubenswrapper[4965]: I0219 09:43:34.913345 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:34Z","lastTransitionTime":"2026-02-19T09:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.018302 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.018353 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.018370 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.018398 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.018415 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:35Z","lastTransitionTime":"2026-02-19T09:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.121253 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.121356 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.121375 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.121396 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.121410 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:35Z","lastTransitionTime":"2026-02-19T09:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.193431 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 11:50:50.329908479 +0000 UTC Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.215827 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02
-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:35Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.224829 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.224919 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.224940 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.224971 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.224992 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:35Z","lastTransitionTime":"2026-02-19T09:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.232051 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:35Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.249126 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lwjwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e1b431a-0390-4366-82d1-6cb782c7a9e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lwjwk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:35Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.262643 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f12bbde7-ee02-4143-b0a7-af0299919dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5177a63dec267486f4128ae0156f4cb79507b735fca5964a100bc27890e5d13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a087adce637c236bd6e6e1ee13c3742493b9f09053cd984a7c4334056f06d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://596eb135489ffa0def98b9d17adf293522beb945db0674088c8cb37d1e83b7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://883c5ece4dc1535a8e1ed6490e8eb103b52b27777
bb8dd2244aa3ccbcac483d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://883c5ece4dc1535a8e1ed6490e8eb103b52b27777bb8dd2244aa3ccbcac483d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:35Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.280341 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:35Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.293375 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:35Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.309179 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:35Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.327887 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.328168 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.328287 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.328370 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.328433 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:35Z","lastTransitionTime":"2026-02-19T09:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.343955 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:16Z\\\",\\\"message\\\":\\\"map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 09:43:16.349938 6589 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 09:43:16.349981 6589 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0219 09:43:16.350026 6589 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 09:43:16.350041 6589 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:43:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dcfpx_openshift-ovn-kubernetes(7c788dfa-1923-4a2b-9619-73acf92ec849)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a484
2ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:35Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.356105 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ab24976-06f3-4373-825a-5234ff24f2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef52c51fd38bf34f1fc3eb014d85c40137dd15030237334159ffbb71e1d6c2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcbd8e6b02f20a249ddb3fbf20ddd72a94b40
fd420cb6ad4c59ea513994ac382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g5jnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:35Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.373688 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:35Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.392022 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:35Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.408007 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:35Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.423081 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:35Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.431990 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.432216 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.432378 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.432532 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.432680 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:35Z","lastTransitionTime":"2026-02-19T09:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.436425 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:35Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.453178 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:43:35Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.474878 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896fd4103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:35Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.494584 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825
771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed
2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:35Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.536296 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.536350 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.536362 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.536385 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.536403 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:35Z","lastTransitionTime":"2026-02-19T09:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.638985 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.639056 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.639069 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.639093 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.639107 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:35Z","lastTransitionTime":"2026-02-19T09:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.742848 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.742894 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.742903 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.742922 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.742935 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:35Z","lastTransitionTime":"2026-02-19T09:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.846089 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.846143 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.846156 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.846174 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.846187 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:35Z","lastTransitionTime":"2026-02-19T09:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.951773 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.951847 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.951860 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.951883 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:35 crc kubenswrapper[4965]: I0219 09:43:35.951896 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:35Z","lastTransitionTime":"2026-02-19T09:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.053992 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.054036 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.054049 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.054066 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.054076 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:36Z","lastTransitionTime":"2026-02-19T09:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.156822 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.156863 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.156874 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.156889 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.156898 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:36Z","lastTransitionTime":"2026-02-19T09:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.194466 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 10:22:37.140464368 +0000 UTC Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.197840 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.197927 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.197866 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:36 crc kubenswrapper[4965]: E0219 09:43:36.197994 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.198103 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:36 crc kubenswrapper[4965]: E0219 09:43:36.198105 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:43:36 crc kubenswrapper[4965]: E0219 09:43:36.198177 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:36 crc kubenswrapper[4965]: E0219 09:43:36.198248 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.259815 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.259866 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.259880 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.259906 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.259922 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:36Z","lastTransitionTime":"2026-02-19T09:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.362677 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.362728 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.362738 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.362756 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.362768 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:36Z","lastTransitionTime":"2026-02-19T09:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.467076 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.467137 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.467151 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.467174 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.467234 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:36Z","lastTransitionTime":"2026-02-19T09:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.570124 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.570165 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.570173 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.570189 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.570241 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:36Z","lastTransitionTime":"2026-02-19T09:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.672126 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.672175 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.672208 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.672230 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.672244 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:36Z","lastTransitionTime":"2026-02-19T09:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.775317 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.775378 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.775390 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.775410 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.775423 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:36Z","lastTransitionTime":"2026-02-19T09:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.878943 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.878985 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.878998 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.879013 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.879026 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:36Z","lastTransitionTime":"2026-02-19T09:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.981930 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.981991 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.982004 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.982026 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:36 crc kubenswrapper[4965]: I0219 09:43:36.982041 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:36Z","lastTransitionTime":"2026-02-19T09:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.086421 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.086475 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.086485 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.086505 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.086520 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:37Z","lastTransitionTime":"2026-02-19T09:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.192728 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.192796 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.192809 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.192827 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.192840 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:37Z","lastTransitionTime":"2026-02-19T09:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.194845 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 16:32:13.648355775 +0000 UTC Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.296027 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.296080 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.296095 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.296113 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.296126 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:37Z","lastTransitionTime":"2026-02-19T09:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.399107 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.399151 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.399166 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.399182 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.399212 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:37Z","lastTransitionTime":"2026-02-19T09:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.502551 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.502600 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.502609 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.502624 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.502663 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:37Z","lastTransitionTime":"2026-02-19T09:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.606076 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.606141 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.606159 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.606187 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.606225 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:37Z","lastTransitionTime":"2026-02-19T09:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.710110 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.710162 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.710172 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.710187 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.710228 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:37Z","lastTransitionTime":"2026-02-19T09:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.813522 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.813582 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.813595 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.813612 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.813625 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:37Z","lastTransitionTime":"2026-02-19T09:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.916705 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.916794 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.916808 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.916825 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:37 crc kubenswrapper[4965]: I0219 09:43:37.916836 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:37Z","lastTransitionTime":"2026-02-19T09:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.019902 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.019978 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.020002 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.020030 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.020057 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:38Z","lastTransitionTime":"2026-02-19T09:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.122658 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.122724 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.122734 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.122754 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.122767 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:38Z","lastTransitionTime":"2026-02-19T09:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.195476 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 03:30:22.094220669 +0000 UTC Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.196844 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.196924 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.196950 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.196973 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:38 crc kubenswrapper[4965]: E0219 09:43:38.197074 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:38 crc kubenswrapper[4965]: E0219 09:43:38.197188 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:38 crc kubenswrapper[4965]: E0219 09:43:38.197386 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:38 crc kubenswrapper[4965]: E0219 09:43:38.197489 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.224989 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.225017 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.225026 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.225040 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.225049 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:38Z","lastTransitionTime":"2026-02-19T09:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.327333 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.327413 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.327426 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.327447 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.327464 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:38Z","lastTransitionTime":"2026-02-19T09:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.430502 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.430574 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.430592 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.430616 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.430633 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:38Z","lastTransitionTime":"2026-02-19T09:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.533835 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.533873 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.533882 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.533897 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.533907 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:38Z","lastTransitionTime":"2026-02-19T09:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.636832 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.636894 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.636911 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.636937 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.636954 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:38Z","lastTransitionTime":"2026-02-19T09:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.677809 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nsjqz_5e0b10c6-02b7-49d0-9a76-e89ebbb00528/kube-multus/0.log" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.677862 4965 generic.go:334] "Generic (PLEG): container finished" podID="5e0b10c6-02b7-49d0-9a76-e89ebbb00528" containerID="8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8" exitCode=1 Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.677889 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nsjqz" event={"ID":"5e0b10c6-02b7-49d0-9a76-e89ebbb00528","Type":"ContainerDied","Data":"8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8"} Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.678259 4965 scope.go:117] "RemoveContainer" containerID="8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.694345 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:38Z\\\",\\\"message\\\":\\\"2026-02-19T09:42:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e32317b1-e341-4aca-af44-d186bb1c6485\\\\n2026-02-19T09:42:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e32317b1-e341-4aca-af44-d186bb1c6485 to /host/opt/cni/bin/\\\\n2026-02-19T09:42:53Z [verbose] multus-daemon started\\\\n2026-02-19T09:42:53Z [verbose] Readiness Indicator file check\\\\n2026-02-19T09:43:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:38Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.709744 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:38Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.722354 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lwjwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e1b431a-0390-4366-82d1-6cb782c7a9e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lwjwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:38Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:38 crc 
kubenswrapper[4965]: I0219 09:43:38.735611 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f12bbde7-ee02-4143-b0a7-af0299919dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5177a63dec267486f4128ae0156f4cb79507b735fca5964a100bc27890e5d13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a087adce637c236bd6e6e1ee13c3742493b9f09053cd984a7c4334056f06d2c\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://596eb135489ffa0def98b9d17adf293522beb945db0674088c8cb37d1e83b7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://883c5ece4dc1535a8e1ed6490e8eb103b52b27777bb8dd2244aa3ccbcac483d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://883c5ece4dc1535a8e1ed6490e8eb103b52b27777bb8dd2244aa3ccbcac483d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:38Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.741346 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.741377 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.741386 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.741399 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.741409 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:38Z","lastTransitionTime":"2026-02-19T09:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.750321 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:38Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.764828 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:38Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.779910 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:38Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.798171 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:16Z\\\",\\\"message\\\":\\\"map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 09:43:16.349938 6589 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 09:43:16.349981 6589 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0219 09:43:16.350026 6589 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 09:43:16.350041 6589 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:43:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dcfpx_openshift-ovn-kubernetes(7c788dfa-1923-4a2b-9619-73acf92ec849)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a484
2ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:38Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.811112 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ab24976-06f3-4373-825a-5234ff24f2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef52c51fd38bf34f1fc3eb014d85c40137dd15030237334159ffbb71e1d6c2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcbd8e6b02f20a249ddb3fbf20ddd72a94b40
fd420cb6ad4c59ea513994ac382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g5jnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:38Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.828519 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:38Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.843902 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:38Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.845257 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.845293 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.845304 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.845318 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.845329 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:38Z","lastTransitionTime":"2026-02-19T09:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.861101 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:38Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.873137 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:38Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.884681 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:38Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.894481 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:43:38Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.907622 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896fd4103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:38Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.927271 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825
771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed
2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:38Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.948730 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.948779 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.948791 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.948809 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:38 crc kubenswrapper[4965]: I0219 09:43:38.948822 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:38Z","lastTransitionTime":"2026-02-19T09:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.051556 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.051667 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.051688 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.051716 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.051735 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:39Z","lastTransitionTime":"2026-02-19T09:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.154434 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.154512 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.154557 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.154600 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.154624 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:39Z","lastTransitionTime":"2026-02-19T09:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.196234 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 11:50:17.057615396 +0000 UTC Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.257750 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.257879 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.257894 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.257941 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.257957 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:39Z","lastTransitionTime":"2026-02-19T09:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.361454 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.361507 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.361525 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.361552 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.361571 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:39Z","lastTransitionTime":"2026-02-19T09:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.464622 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.464668 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.464681 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.464696 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.464710 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:39Z","lastTransitionTime":"2026-02-19T09:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.568543 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.568620 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.568636 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.568665 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.568685 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:39Z","lastTransitionTime":"2026-02-19T09:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.671921 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.671973 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.671986 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.672007 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.672021 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:39Z","lastTransitionTime":"2026-02-19T09:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.683369 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nsjqz_5e0b10c6-02b7-49d0-9a76-e89ebbb00528/kube-multus/0.log" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.683455 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nsjqz" event={"ID":"5e0b10c6-02b7-49d0-9a76-e89ebbb00528","Type":"ContainerStarted","Data":"54890991cfb2ac3b404ed7c4c815f5c02e5a23fed0a82dcbc8b0071ae6bda90b"} Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.702228 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed
2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:39Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.731326 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:39Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.747796 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:39Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.765982 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:43:39Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.775539 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.775608 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.775624 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.775648 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.775664 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:39Z","lastTransitionTime":"2026-02-19T09:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.781939 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896fd4103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:39Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.797728 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:39Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.814031 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54890991cfb2ac3b404ed7c4c815f5c02e5a23fed0a82dcbc8b0071ae6bda90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:38Z\\\",\\\"message\\\":\\\"2026-02-19T09:42:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e32317b1-e341-4aca-af44-d186bb1c6485\\\\n2026-02-19T09:42:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e32317b1-e341-4aca-af44-d186bb1c6485 to /host/opt/cni/bin/\\\\n2026-02-19T09:42:53Z [verbose] multus-daemon started\\\\n2026-02-19T09:42:53Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T09:43:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:39Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.828304 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd
26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:39Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.841277 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lwjwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e1b431a-0390-4366-82d1-6cb782c7a9e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lwjwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:39Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:39 crc 
kubenswrapper[4965]: I0219 09:43:39.858931 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f12bbde7-ee02-4143-b0a7-af0299919dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5177a63dec267486f4128ae0156f4cb79507b735fca5964a100bc27890e5d13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a087adce637c236bd6e6e1ee13c3742493b9f09053cd984a7c4334056f06d2c\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://596eb135489ffa0def98b9d17adf293522beb945db0674088c8cb37d1e83b7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://883c5ece4dc1535a8e1ed6490e8eb103b52b27777bb8dd2244aa3ccbcac483d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://883c5ece4dc1535a8e1ed6490e8eb103b52b27777bb8dd2244aa3ccbcac483d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:39Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.878130 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.878170 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.878184 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.878231 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.878246 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:39Z","lastTransitionTime":"2026-02-19T09:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.878773 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:39Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.895416 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:39Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.906451 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:39Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.928457 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:16Z\\\",\\\"message\\\":\\\"map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 09:43:16.349938 6589 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 09:43:16.349981 6589 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0219 09:43:16.350026 6589 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 09:43:16.350041 6589 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:43:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dcfpx_openshift-ovn-kubernetes(7c788dfa-1923-4a2b-9619-73acf92ec849)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a484
2ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:39Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.942420 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ab24976-06f3-4373-825a-5234ff24f2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef52c51fd38bf34f1fc3eb014d85c40137dd15030237334159ffbb71e1d6c2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcbd8e6b02f20a249ddb3fbf20ddd72a94b40
fd420cb6ad4c59ea513994ac382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g5jnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:39Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.960554 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:39Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.972567 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:39Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.980963 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.981001 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.981020 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.981042 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:39 crc kubenswrapper[4965]: I0219 09:43:39.981058 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:39Z","lastTransitionTime":"2026-02-19T09:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.084646 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.084727 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.084744 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.084767 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.084784 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:40Z","lastTransitionTime":"2026-02-19T09:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.187919 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.187974 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.187992 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.188015 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.188034 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:40Z","lastTransitionTime":"2026-02-19T09:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.198260 4965 scope.go:117] "RemoveContainer" containerID="656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.198774 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:40 crc kubenswrapper[4965]: E0219 09:43:40.198934 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.199182 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:40 crc kubenswrapper[4965]: E0219 09:43:40.199329 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.199538 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:40 crc kubenswrapper[4965]: E0219 09:43:40.199671 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.199894 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:40 crc kubenswrapper[4965]: E0219 09:43:40.199994 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.200399 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 18:41:19.723585437 +0000 UTC Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.290524 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.290567 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.290582 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.290603 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.290619 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:40Z","lastTransitionTime":"2026-02-19T09:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.393972 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.394050 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.394077 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.394109 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.394132 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:40Z","lastTransitionTime":"2026-02-19T09:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.506519 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.506570 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.506585 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.506608 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.506622 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:40Z","lastTransitionTime":"2026-02-19T09:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.609888 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.609937 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.609948 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.609966 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.609979 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:40Z","lastTransitionTime":"2026-02-19T09:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.688770 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dcfpx_7c788dfa-1923-4a2b-9619-73acf92ec849/ovnkube-controller/2.log" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.690929 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerStarted","Data":"df664c777d3f25b8d74075723b13263568db42db0feb4d1c5a85cc38fc50aee9"} Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.691445 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.712617 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.712651 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.712660 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.712673 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.712682 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:40Z","lastTransitionTime":"2026-02-19T09:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.720628 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:40Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.737137 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:40Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.758299 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:40Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.775260 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:40Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.793866 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df664c777d3f25b8d74075723b13263568db42db0feb4d1c5a85cc38fc50aee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:16Z\\\",\\\"message\\\":\\\"map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 09:43:16.349938 6589 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 09:43:16.349981 6589 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0219 09:43:16.350026 6589 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 09:43:16.350041 6589 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:43:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:40Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.806232 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ab24976-06f3-4373-825a-5234ff24f2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef52c51fd38bf34f1fc3eb014d85c40137dd15030237334159ffbb71e1d6c2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcbd8e6b02f20a249ddb3fbf20ddd72a94b40
fd420cb6ad4c59ea513994ac382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g5jnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:40Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.814841 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.814921 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.814933 4965 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.814950 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.814963 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:40Z","lastTransitionTime":"2026-02-19T09:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.821000 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896f
d4103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:40Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.835423 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\"
,\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:40Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.851315 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:40Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.869153 4965 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:40Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.883708 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:43:40Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.897484 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f12bbde7-ee02-4143-b0a7-af0299919dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5177a63dec267486f4128ae0156f4cb79507b735fca5964a100bc27890e5d13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://1a087adce637c236bd6e6e1ee13c3742493b9f09053cd984a7c4334056f06d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://596eb135489ffa0def98b9d17adf293522beb945db0674088c8cb37d1e83b7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://883c5ece4dc1535a8e1ed6490e8eb103b52b27777bb8dd2244aa3ccbcac483d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://883c5ece4dc1535a8e1ed6490e8eb103b52b27777bb8dd2244aa3ccbcac483d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:40Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.911769 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:40Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.917156 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 
09:43:40.917218 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.917232 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.917248 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.917257 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:40Z","lastTransitionTime":"2026-02-19T09:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.923877 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:40Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.937100 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54890991cfb2ac3b404ed7c4c815f5c02e5a23fed0a82dcbc8b0071ae6bda90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:38Z\\\",\\\"message\\\":\\\"2026-02-19T09:42:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e32317b1-e341-4aca-af44-d186bb1c6485\\\\n2026-02-19T09:42:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e32317b1-e341-4aca-af44-d186bb1c6485 to /host/opt/cni/bin/\\\\n2026-02-19T09:42:53Z [verbose] multus-daemon started\\\\n2026-02-19T09:42:53Z [verbose] Readiness Indicator file check\\\\n2026-02-19T09:43:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:40Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.947900 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:40Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:40 crc kubenswrapper[4965]: I0219 09:43:40.958062 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lwjwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e1b431a-0390-4366-82d1-6cb782c7a9e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lwjwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:40Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:41 crc 
kubenswrapper[4965]: I0219 09:43:41.020479 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.020526 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.020538 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.020557 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.020567 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:41Z","lastTransitionTime":"2026-02-19T09:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.123687 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.123742 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.123758 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.123784 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.123797 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:41Z","lastTransitionTime":"2026-02-19T09:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.200757 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 07:55:10.75328137 +0000 UTC Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.226492 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.226525 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.226533 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.226548 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.226557 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:41Z","lastTransitionTime":"2026-02-19T09:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.330321 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.330369 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.330382 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.330399 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.330411 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:41Z","lastTransitionTime":"2026-02-19T09:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.435066 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.435246 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.435267 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.435296 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.435315 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:41Z","lastTransitionTime":"2026-02-19T09:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.539256 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.539326 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.539343 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.539364 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.539383 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:41Z","lastTransitionTime":"2026-02-19T09:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.642945 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.643027 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.643044 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.643063 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.643077 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:41Z","lastTransitionTime":"2026-02-19T09:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.696984 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dcfpx_7c788dfa-1923-4a2b-9619-73acf92ec849/ovnkube-controller/3.log" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.697852 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dcfpx_7c788dfa-1923-4a2b-9619-73acf92ec849/ovnkube-controller/2.log" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.701759 4965 generic.go:334] "Generic (PLEG): container finished" podID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerID="df664c777d3f25b8d74075723b13263568db42db0feb4d1c5a85cc38fc50aee9" exitCode=1 Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.701826 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerDied","Data":"df664c777d3f25b8d74075723b13263568db42db0feb4d1c5a85cc38fc50aee9"} Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.701879 4965 scope.go:117] "RemoveContainer" containerID="656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.703012 4965 scope.go:117] "RemoveContainer" containerID="df664c777d3f25b8d74075723b13263568db42db0feb4d1c5a85cc38fc50aee9" Feb 19 09:43:41 crc kubenswrapper[4965]: E0219 09:43:41.703365 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dcfpx_openshift-ovn-kubernetes(7c788dfa-1923-4a2b-9619-73acf92ec849)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.723822 4965 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.744569 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:43:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.749350 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.749394 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.749404 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.749422 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.749434 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:41Z","lastTransitionTime":"2026-02-19T09:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.769456 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896fd4103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.798511 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed
2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.820029 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.834813 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.849432 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lwjwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e1b431a-0390-4366-82d1-6cb782c7a9e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lwjwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:41 crc 
kubenswrapper[4965]: I0219 09:43:41.853398 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.853442 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.853453 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.853468 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.853479 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:41Z","lastTransitionTime":"2026-02-19T09:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.864384 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f12bbde7-ee02-4143-b0a7-af0299919dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5177a63dec267486f4128ae0156f4cb79507b735fca5964a100bc27890e5d13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a087adce637c236bd6e6e1ee13c37
42493b9f09053cd984a7c4334056f06d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://596eb135489ffa0def98b9d17adf293522beb945db0674088c8cb37d1e83b7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://883c5ece4dc1535a8e1ed6490e8eb103b52b27777bb8dd2244aa3ccbcac483d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://883c5ece4dc1535a8e1ed6490e8eb103b52b27777bb8dd2244aa3ccbcac483d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.879770 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.892279 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.907413 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54890991cfb2ac3b404ed7c4c815f5c02e5a23fed0a82dcbc8b0071ae6bda90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:38Z\\\",\\\"message\\\":\\\"2026-02-19T09:42:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e32317b1-e341-4aca-af44-d186bb1c6485\\\\n2026-02-19T09:42:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e32317b1-e341-4aca-af44-d186bb1c6485 to /host/opt/cni/bin/\\\\n2026-02-19T09:42:53Z [verbose] multus-daemon started\\\\n2026-02-19T09:42:53Z [verbose] Readiness Indicator file check\\\\n2026-02-19T09:43:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.932534 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df664c777d3f25b8d74075723b13263568db42db0feb4d1c5a85cc38fc50aee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://656d2c48a8186b05aa2582865c3075e2a72238ef8cfd816187f2737062be98ee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:16Z\\\",\\\"message\\\":\\\"map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 09:43:16.349938 6589 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 09:43:16.349981 6589 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0219 09:43:16.350026 6589 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0219 09:43:16.350041 6589 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:43:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df664c777d3f25b8d74075723b13263568db42db0feb4d1c5a85cc38fc50aee9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:41Z\\\",\\\"message\\\":\\\"81 6971 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 09:43:41.162525 6971 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nF0219 09:43:41.162540 6971 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:41Z is after 
2025-08-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:43:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc1402
20256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.956544 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ab24976-06f3-4373-825a-5234ff24f2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef52c51fd38bf34f1fc3eb014d85c40137dd15030237334159ffbb71e1d6c2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcbd8e6b02f20a249ddb3fbf20ddd72a94b40
fd420cb6ad4c59ea513994ac382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g5jnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.958154 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.958246 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.958267 4965 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.958291 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.958309 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:41Z","lastTransitionTime":"2026-02-19T09:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.974853 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf8
6d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:41 crc kubenswrapper[4965]: I0219 09:43:41.990991 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:41Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.008766 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:42Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.022907 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:42Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.061084 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.061119 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.061128 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:42 crc 
kubenswrapper[4965]: I0219 09:43:42.061142 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.061153 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:42Z","lastTransitionTime":"2026-02-19T09:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.163900 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.164025 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.164087 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.164124 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.164151 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:42Z","lastTransitionTime":"2026-02-19T09:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.197491 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.197539 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.197608 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:42 crc kubenswrapper[4965]: E0219 09:43:42.197665 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.197491 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:42 crc kubenswrapper[4965]: E0219 09:43:42.197780 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:42 crc kubenswrapper[4965]: E0219 09:43:42.197841 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:42 crc kubenswrapper[4965]: E0219 09:43:42.197892 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.200894 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 11:20:21.273620361 +0000 UTC Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.267076 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.267149 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.267169 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.267221 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.267242 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:42Z","lastTransitionTime":"2026-02-19T09:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.370820 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.370870 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.370888 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.370911 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.370928 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:42Z","lastTransitionTime":"2026-02-19T09:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.475148 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.475291 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.475312 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.475374 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.475391 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:42Z","lastTransitionTime":"2026-02-19T09:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.579442 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.579508 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.579524 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.579546 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.579567 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:42Z","lastTransitionTime":"2026-02-19T09:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.682718 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.682808 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.682833 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.682867 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.682966 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:42Z","lastTransitionTime":"2026-02-19T09:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.708686 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dcfpx_7c788dfa-1923-4a2b-9619-73acf92ec849/ovnkube-controller/3.log" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.713176 4965 scope.go:117] "RemoveContainer" containerID="df664c777d3f25b8d74075723b13263568db42db0feb4d1c5a85cc38fc50aee9" Feb 19 09:43:42 crc kubenswrapper[4965]: E0219 09:43:42.713353 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dcfpx_openshift-ovn-kubernetes(7c788dfa-1923-4a2b-9619-73acf92ec849)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.727256 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:42Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.741069 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:42Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.757682 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:42Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.771265 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:42Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.785998 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.786068 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.786085 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:42 crc 
kubenswrapper[4965]: I0219 09:43:42.786109 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.786127 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:42Z","lastTransitionTime":"2026-02-19T09:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.796265 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df664c777d3f25b8d74075723b13263568db42db0feb4d1c5a85cc38fc50aee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df664c777d3f25b8d74075723b13263568db42db0feb4d1c5a85cc38fc50aee9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:41Z\\\",\\\"message\\\":\\\"81 6971 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 09:43:41.162525 6971 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nF0219 09:43:41.162540 6971 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:41Z is after 2025-08-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:43:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dcfpx_openshift-ovn-kubernetes(7c788dfa-1923-4a2b-9619-73acf92ec849)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a484
2ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:42Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.814280 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ab24976-06f3-4373-825a-5234ff24f2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef52c51fd38bf34f1fc3eb014d85c40137dd15030237334159ffbb71e1d6c2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcbd8e6b02f20a249ddb3fbf20ddd72a94b40
fd420cb6ad4c59ea513994ac382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g5jnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:42Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.830950 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\"
,\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:42Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.845808 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:42Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.911639 4965 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:42Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.915225 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.915279 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.915292 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.915313 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.915326 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:42Z","lastTransitionTime":"2026-02-19T09:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.933972 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:42Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.954617 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896fd4103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-
19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:42Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.969112 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f12bbde7-ee02-4143-b0a7-af0299919dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5177a63dec267486f4128ae0156f4cb79507b735fca5964a100bc27890e5d13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a087adce637c236bd6e6e1ee13c3742493b9f09053cd984a7c4334056f06d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://596eb135489ffa0def98b9d17adf293522beb945db0674088c8cb37d1e83b7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://883c5ece4dc1535a8e1ed6490e8eb103b52b27777bb8dd2244aa3ccbcac483d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://883c5ece4dc1535a8e1ed6490e8eb103b52b27777bb8dd2244aa3ccbcac483d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:42Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:42 crc kubenswrapper[4965]: I0219 09:43:42.988764 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:42Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.004744 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:43Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.018863 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.018900 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.018911 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.018929 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.018943 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:43Z","lastTransitionTime":"2026-02-19T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.027070 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54890991cfb2ac3b404ed7c4c815f5c02e5a23fed0a82dcbc8b0071ae6bda90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:38Z\\\",\\\"message\\\":\\\"2026-02-19T09:42:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e32317b1-e341-4aca-af44-d186bb1c6485\\\\n2026-02-19T09:42:53+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e32317b1-e341-4aca-af44-d186bb1c6485 to /host/opt/cni/bin/\\\\n2026-02-19T09:42:53Z [verbose] multus-daemon started\\\\n2026-02-19T09:42:53Z [verbose] Readiness Indicator file check\\\\n2026-02-19T09:43:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:43Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.042620 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:43Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.065661 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lwjwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e1b431a-0390-4366-82d1-6cb782c7a9e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lwjwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:43Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:43 crc 
kubenswrapper[4965]: I0219 09:43:43.121521 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.121577 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.121592 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.121612 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.121627 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:43Z","lastTransitionTime":"2026-02-19T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.201428 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:19:34.629745344 +0000 UTC Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.224338 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.224408 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.224420 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.224484 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.224499 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:43Z","lastTransitionTime":"2026-02-19T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.327513 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.327573 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.327596 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.327621 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.327637 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:43Z","lastTransitionTime":"2026-02-19T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.431084 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.431243 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.431256 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.431279 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.431295 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:43Z","lastTransitionTime":"2026-02-19T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.533628 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.533730 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.533753 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.533790 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.533826 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:43Z","lastTransitionTime":"2026-02-19T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.636838 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.636897 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.636920 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.636944 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.636960 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:43Z","lastTransitionTime":"2026-02-19T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.739905 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.739959 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.739973 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.739993 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.740007 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:43Z","lastTransitionTime":"2026-02-19T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.842807 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.842869 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.842882 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.842905 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.842921 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:43Z","lastTransitionTime":"2026-02-19T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.854253 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.854344 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.854365 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.854395 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.854414 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:43Z","lastTransitionTime":"2026-02-19T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:43 crc kubenswrapper[4965]: E0219 09:43:43.869734 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:43Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.875874 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.875934 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.875949 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.875972 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.875989 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:43Z","lastTransitionTime":"2026-02-19T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:43 crc kubenswrapper[4965]: E0219 09:43:43.896422 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:43Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.901415 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.901485 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.901506 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.901537 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.901556 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:43Z","lastTransitionTime":"2026-02-19T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:43 crc kubenswrapper[4965]: E0219 09:43:43.916700 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:43Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.921479 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.921518 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.921531 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.921546 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.921555 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:43Z","lastTransitionTime":"2026-02-19T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:43 crc kubenswrapper[4965]: E0219 09:43:43.938650 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:43Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.944525 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.944566 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.944578 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.944600 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.944618 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:43Z","lastTransitionTime":"2026-02-19T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:43 crc kubenswrapper[4965]: E0219 09:43:43.965544 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:43Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:43 crc kubenswrapper[4965]: E0219 09:43:43.965704 4965 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.967834 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.967874 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.967886 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.967905 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:43 crc kubenswrapper[4965]: I0219 09:43:43.967920 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:43Z","lastTransitionTime":"2026-02-19T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.070622 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.070679 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.070692 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.070714 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.070728 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:44Z","lastTransitionTime":"2026-02-19T09:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.173589 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.173656 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.173673 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.173700 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.173718 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:44Z","lastTransitionTime":"2026-02-19T09:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.197617 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.197687 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.197689 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.197617 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:44 crc kubenswrapper[4965]: E0219 09:43:44.197824 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:44 crc kubenswrapper[4965]: E0219 09:43:44.197990 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:43:44 crc kubenswrapper[4965]: E0219 09:43:44.198180 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:44 crc kubenswrapper[4965]: E0219 09:43:44.198404 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.202045 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 16:42:25.884496398 +0000 UTC Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.277406 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.277465 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.277486 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.277513 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.277530 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:44Z","lastTransitionTime":"2026-02-19T09:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.380686 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.380746 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.380763 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.380787 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.380804 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:44Z","lastTransitionTime":"2026-02-19T09:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.484075 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.484145 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.484169 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.484234 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.484260 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:44Z","lastTransitionTime":"2026-02-19T09:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.587317 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.587445 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.587461 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.587491 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.587507 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:44Z","lastTransitionTime":"2026-02-19T09:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.691130 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.691229 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.691250 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.691281 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.691298 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:44Z","lastTransitionTime":"2026-02-19T09:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.795303 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.795386 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.795409 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.795441 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.795463 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:44Z","lastTransitionTime":"2026-02-19T09:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.898913 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.898984 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.898996 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.899021 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:44 crc kubenswrapper[4965]: I0219 09:43:44.899036 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:44Z","lastTransitionTime":"2026-02-19T09:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.003632 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.003712 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.003736 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.003768 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.003790 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:45Z","lastTransitionTime":"2026-02-19T09:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.106642 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.106715 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.106738 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.106766 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.106789 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:45Z","lastTransitionTime":"2026-02-19T09:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.202439 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 11:24:31.586677839 +0000 UTC Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.210381 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.210460 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.210484 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.210512 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.210536 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:45Z","lastTransitionTime":"2026-02-19T09:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.218043 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:45Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.237098 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:45Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.260013 4965 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:45Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.276880 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:43:45Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.300357 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896fd4103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:45Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.315947 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.317416 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.317447 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.317479 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.317506 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:45Z","lastTransitionTime":"2026-02-19T09:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.322563 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f12bbde7-ee02-4143-b0a7-af0299919dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5177a63dec267486f4128ae0156f4cb79507b735fca5964a100bc27890e5d13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a087adce637c236bd6e6e1ee13c37
42493b9f09053cd984a7c4334056f06d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://596eb135489ffa0def98b9d17adf293522beb945db0674088c8cb37d1e83b7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://883c5ece4dc1535a8e1ed6490e8eb103b52b27777bb8dd2244aa3ccbcac483d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://883c5ece4dc1535a8e1ed6490e8eb103b52b27777bb8dd2244aa3ccbcac483d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:45Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.343561 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:45Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.358367 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:45Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.374459 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54890991cfb2ac3b404ed7c4c815f5c02e5a23fed0a82dcbc8b0071ae6bda90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:38Z\\\",\\\"message\\\":\\\"2026-02-19T09:42:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e32317b1-e341-4aca-af44-d186bb1c6485\\\\n2026-02-19T09:42:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e32317b1-e341-4aca-af44-d186bb1c6485 to /host/opt/cni/bin/\\\\n2026-02-19T09:42:53Z [verbose] multus-daemon started\\\\n2026-02-19T09:42:53Z [verbose] Readiness Indicator file check\\\\n2026-02-19T09:43:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:45Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.390751 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:45Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.407765 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lwjwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e1b431a-0390-4366-82d1-6cb782c7a9e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lwjwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:45Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:45 crc 
kubenswrapper[4965]: I0219 09:43:45.420704 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.420770 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.420789 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.420816 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.420835 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:45Z","lastTransitionTime":"2026-02-19T09:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.430562 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:45Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.445550 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:45Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.465458 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:45Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.484660 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:45Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.510138 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df664c777d3f25b8d74075723b13263568db42db0feb4d1c5a85cc38fc50aee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df664c777d3f25b8d74075723b13263568db42db0feb4d1c5a85cc38fc50aee9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:41Z\\\",\\\"message\\\":\\\"81 6971 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 09:43:41.162525 6971 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nF0219 09:43:41.162540 6971 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:41Z is after 2025-08-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:43:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dcfpx_openshift-ovn-kubernetes(7c788dfa-1923-4a2b-9619-73acf92ec849)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a484
2ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:45Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.523757 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.523812 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.523831 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.523854 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.523869 4965 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:45Z","lastTransitionTime":"2026-02-19T09:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.529152 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ab24976-06f3-4373-825a-5234ff24f2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef52c51fd38bf34f1fc3eb014d85c40137dd15030237334159ffbb71e1d6c2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcbd8e6b02f20a249ddb3fbf20ddd72a94b40fd420cb6ad4c59ea513994ac382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g5jnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:45Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.626174 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.626232 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.626243 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.626257 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.626267 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:45Z","lastTransitionTime":"2026-02-19T09:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.728813 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.728855 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.728864 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.728879 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.728889 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:45Z","lastTransitionTime":"2026-02-19T09:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.831353 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.831404 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.831418 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.831437 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.831454 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:45Z","lastTransitionTime":"2026-02-19T09:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.934973 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.935065 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.935086 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.935114 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:45 crc kubenswrapper[4965]: I0219 09:43:45.935134 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:45Z","lastTransitionTime":"2026-02-19T09:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.038945 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.038993 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.039006 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.039027 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.039041 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:46Z","lastTransitionTime":"2026-02-19T09:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.142085 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.142136 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.142147 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.142164 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.142176 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:46Z","lastTransitionTime":"2026-02-19T09:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.197241 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.197374 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.197417 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:46 crc kubenswrapper[4965]: E0219 09:43:46.197606 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.197636 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:46 crc kubenswrapper[4965]: E0219 09:43:46.197779 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:46 crc kubenswrapper[4965]: E0219 09:43:46.197926 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:43:46 crc kubenswrapper[4965]: E0219 09:43:46.198049 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.203520 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 16:39:46.367946176 +0000 UTC Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.245826 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.245898 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.245923 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.245952 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.245974 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:46Z","lastTransitionTime":"2026-02-19T09:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.349383 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.349461 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.349474 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.349496 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.349510 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:46Z","lastTransitionTime":"2026-02-19T09:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.452913 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.452978 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.452994 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.453016 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.453029 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:46Z","lastTransitionTime":"2026-02-19T09:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.556180 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.556312 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.556336 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.556423 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.556450 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:46Z","lastTransitionTime":"2026-02-19T09:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.659955 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.660029 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.660056 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.660086 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.660109 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:46Z","lastTransitionTime":"2026-02-19T09:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.762792 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.762859 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.762876 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.762903 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.762920 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:46Z","lastTransitionTime":"2026-02-19T09:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.866658 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.866748 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.866768 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.866796 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.866816 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:46Z","lastTransitionTime":"2026-02-19T09:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.977313 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.977366 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.977381 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.977401 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:46 crc kubenswrapper[4965]: I0219 09:43:46.977413 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:46Z","lastTransitionTime":"2026-02-19T09:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.080547 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.080627 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.080652 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.080682 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.080710 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:47Z","lastTransitionTime":"2026-02-19T09:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.183454 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.183531 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.183554 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.183582 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.183604 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:47Z","lastTransitionTime":"2026-02-19T09:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.203700 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 00:10:11.507898244 +0000 UTC Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.287370 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.287458 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.287475 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.287497 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.287513 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:47Z","lastTransitionTime":"2026-02-19T09:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.391666 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.391735 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.391759 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.391788 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.391810 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:47Z","lastTransitionTime":"2026-02-19T09:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.494786 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.494873 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.494900 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.494934 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.494959 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:47Z","lastTransitionTime":"2026-02-19T09:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.598793 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.598884 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.598907 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.598942 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.599002 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:47Z","lastTransitionTime":"2026-02-19T09:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.703006 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.703077 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.703099 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.703128 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.703146 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:47Z","lastTransitionTime":"2026-02-19T09:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.807491 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.807568 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.807592 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.807623 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.807649 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:47Z","lastTransitionTime":"2026-02-19T09:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.910997 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.911055 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.911072 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.911095 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:47 crc kubenswrapper[4965]: I0219 09:43:47.911112 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:47Z","lastTransitionTime":"2026-02-19T09:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.014694 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.014785 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.014804 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.014832 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.014852 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:48Z","lastTransitionTime":"2026-02-19T09:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.118311 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.118389 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.118408 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.118435 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.118455 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:48Z","lastTransitionTime":"2026-02-19T09:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.197525 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.197662 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:48 crc kubenswrapper[4965]: E0219 09:43:48.197720 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.197828 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.197943 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:48 crc kubenswrapper[4965]: E0219 09:43:48.197986 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:48 crc kubenswrapper[4965]: E0219 09:43:48.198073 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:48 crc kubenswrapper[4965]: E0219 09:43:48.198167 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.204534 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 10:48:24.199774574 +0000 UTC Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.221414 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.221511 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.221534 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.221567 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.221589 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:48Z","lastTransitionTime":"2026-02-19T09:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.325170 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.325281 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.325301 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.325330 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.325350 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:48Z","lastTransitionTime":"2026-02-19T09:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.429240 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.429314 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.429331 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.429359 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.429380 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:48Z","lastTransitionTime":"2026-02-19T09:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.532078 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.532120 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.532128 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.532144 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.532154 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:48Z","lastTransitionTime":"2026-02-19T09:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.635602 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.635663 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.635679 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.635699 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.635712 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:48Z","lastTransitionTime":"2026-02-19T09:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.738729 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.738806 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.738828 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.738855 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.738876 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:48Z","lastTransitionTime":"2026-02-19T09:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.842308 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.842387 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.842412 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.842442 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.842467 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:48Z","lastTransitionTime":"2026-02-19T09:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.946390 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.946457 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.946482 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.946510 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:48 crc kubenswrapper[4965]: I0219 09:43:48.946535 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:48Z","lastTransitionTime":"2026-02-19T09:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.049341 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.049424 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.049443 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.049469 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.049485 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:49Z","lastTransitionTime":"2026-02-19T09:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.153234 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.153297 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.153322 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.153351 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.153376 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:49Z","lastTransitionTime":"2026-02-19T09:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.205140 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 21:06:49.271514663 +0000 UTC Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.256957 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.257029 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.257049 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.257073 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.257092 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:49Z","lastTransitionTime":"2026-02-19T09:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.361163 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.361273 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.361296 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.361324 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.361343 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:49Z","lastTransitionTime":"2026-02-19T09:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.464288 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.464344 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.464355 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.464377 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.464389 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:49Z","lastTransitionTime":"2026-02-19T09:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.568340 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.568410 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.568429 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.568457 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.568477 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:49Z","lastTransitionTime":"2026-02-19T09:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.671270 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.671691 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.671811 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.671936 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.672143 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:49Z","lastTransitionTime":"2026-02-19T09:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.776484 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.776932 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.777070 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.777166 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.777285 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:49Z","lastTransitionTime":"2026-02-19T09:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.880422 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.880452 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.880464 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.880479 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.880489 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:49Z","lastTransitionTime":"2026-02-19T09:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.984605 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.984679 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.984704 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.984736 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.984758 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:49Z","lastTransitionTime":"2026-02-19T09:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:49 crc kubenswrapper[4965]: I0219 09:43:49.998352 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:43:49 crc kubenswrapper[4965]: E0219 09:43:49.998619 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 09:44:53.998568277 +0000 UTC m=+149.619889587 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.088326 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.088396 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.088413 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.088436 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.088454 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:50Z","lastTransitionTime":"2026-02-19T09:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.099599 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.099664 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.099710 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.099746 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:50 crc kubenswrapper[4965]: E0219 09:43:50.099838 4965 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:43:50 crc kubenswrapper[4965]: E0219 09:43:50.099871 4965 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:43:50 crc kubenswrapper[4965]: E0219 09:43:50.099958 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:44:54.099939138 +0000 UTC m=+149.721260448 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 09:43:50 crc kubenswrapper[4965]: E0219 09:43:50.099893 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:43:50 crc kubenswrapper[4965]: E0219 09:43:50.099982 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:44:54.099973809 +0000 UTC m=+149.721295109 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 09:43:50 crc kubenswrapper[4965]: E0219 09:43:50.100003 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:43:50 crc kubenswrapper[4965]: E0219 09:43:50.100000 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 09:43:50 crc kubenswrapper[4965]: E0219 09:43:50.100056 4965 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 09:43:50 crc kubenswrapper[4965]: E0219 09:43:50.100084 4965 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:43:50 crc kubenswrapper[4965]: E0219 09:43:50.100023 4965 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:43:50 crc kubenswrapper[4965]: E0219 09:43:50.100161 4965 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 09:44:54.100135733 +0000 UTC m=+149.721457073 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:43:50 crc kubenswrapper[4965]: E0219 09:43:50.100234 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 09:44:54.100177614 +0000 UTC m=+149.721498954 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.192613 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.192731 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.192748 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.192774 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.192794 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:50Z","lastTransitionTime":"2026-02-19T09:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.196868 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.197029 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:50 crc kubenswrapper[4965]: E0219 09:43:50.197055 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.197144 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.197473 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:50 crc kubenswrapper[4965]: E0219 09:43:50.197542 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:43:50 crc kubenswrapper[4965]: E0219 09:43:50.197694 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:50 crc kubenswrapper[4965]: E0219 09:43:50.197744 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.205754 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 20:12:40.080731834 +0000 UTC Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.215423 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.296581 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.296642 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.296665 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.296687 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.296702 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:50Z","lastTransitionTime":"2026-02-19T09:43:50Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.399656 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.399701 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.399712 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.399731 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.399744 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:50Z","lastTransitionTime":"2026-02-19T09:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.504166 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.504560 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.504749 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.504883 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.504998 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:50Z","lastTransitionTime":"2026-02-19T09:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.606993 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.607030 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.607041 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.607058 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.607069 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:50Z","lastTransitionTime":"2026-02-19T09:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.710321 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.710393 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.710413 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.710437 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.710454 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:50Z","lastTransitionTime":"2026-02-19T09:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.814019 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.814093 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.814110 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.814137 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.814156 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:50Z","lastTransitionTime":"2026-02-19T09:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.918101 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.918249 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.918269 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.918326 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:50 crc kubenswrapper[4965]: I0219 09:43:50.918348 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:50Z","lastTransitionTime":"2026-02-19T09:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.022163 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.022288 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.022313 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.022354 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.022384 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:51Z","lastTransitionTime":"2026-02-19T09:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.126245 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.126329 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.126349 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.126378 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.126398 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:51Z","lastTransitionTime":"2026-02-19T09:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.206843 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 09:19:00.466097591 +0000 UTC Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.230083 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.230178 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.230224 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.230248 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.230265 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:51Z","lastTransitionTime":"2026-02-19T09:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.334366 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.334845 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.335003 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.335133 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.335325 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:51Z","lastTransitionTime":"2026-02-19T09:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.438943 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.439451 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.439595 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.439802 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.439940 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:51Z","lastTransitionTime":"2026-02-19T09:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.543358 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.543462 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.543481 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.543500 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.543513 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:51Z","lastTransitionTime":"2026-02-19T09:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.647822 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.647892 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.647910 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.647944 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.647963 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:51Z","lastTransitionTime":"2026-02-19T09:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.751516 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.751565 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.751577 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.751597 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.751611 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:51Z","lastTransitionTime":"2026-02-19T09:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.854998 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.855585 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.855755 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.855913 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.856155 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:51Z","lastTransitionTime":"2026-02-19T09:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.959844 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.959931 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.959955 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.959987 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:51 crc kubenswrapper[4965]: I0219 09:43:51.960009 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:51Z","lastTransitionTime":"2026-02-19T09:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.062750 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.062827 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.062851 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.062880 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.062903 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:52Z","lastTransitionTime":"2026-02-19T09:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.166010 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.166059 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.166073 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.166091 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.166105 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:52Z","lastTransitionTime":"2026-02-19T09:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.197609 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.197716 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:52 crc kubenswrapper[4965]: E0219 09:43:52.197757 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:52 crc kubenswrapper[4965]: E0219 09:43:52.197937 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.198016 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.198179 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:52 crc kubenswrapper[4965]: E0219 09:43:52.198308 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:43:52 crc kubenswrapper[4965]: E0219 09:43:52.198384 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.207383 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 16:15:21.05438747 +0000 UTC Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.269072 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.269117 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.269129 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.269148 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.269161 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:52Z","lastTransitionTime":"2026-02-19T09:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.372828 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.372885 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.372900 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.372921 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.372938 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:52Z","lastTransitionTime":"2026-02-19T09:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.476088 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.476141 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.476157 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.476178 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.476220 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:52Z","lastTransitionTime":"2026-02-19T09:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.579309 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.579374 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.579387 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.579407 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.579423 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:52Z","lastTransitionTime":"2026-02-19T09:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.683007 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.683386 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.683534 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.683643 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.683866 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:52Z","lastTransitionTime":"2026-02-19T09:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.787623 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.787712 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.787740 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.787772 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.787795 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:52Z","lastTransitionTime":"2026-02-19T09:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.891477 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.891530 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.891542 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.891563 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.891577 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:52Z","lastTransitionTime":"2026-02-19T09:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.994209 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.994261 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.994272 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.994291 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:52 crc kubenswrapper[4965]: I0219 09:43:52.994304 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:52Z","lastTransitionTime":"2026-02-19T09:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.097867 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.098596 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.098744 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.098868 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.098963 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:53Z","lastTransitionTime":"2026-02-19T09:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.202650 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.203327 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.203604 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.203820 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.203956 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:53Z","lastTransitionTime":"2026-02-19T09:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.207874 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 18:31:18.040926067 +0000 UTC Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.308115 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.308668 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.308967 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.309389 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.309944 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:53Z","lastTransitionTime":"2026-02-19T09:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.413353 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.413835 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.414072 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.414258 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.414445 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:53Z","lastTransitionTime":"2026-02-19T09:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.518569 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.518627 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.518645 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.518669 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.518690 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:53Z","lastTransitionTime":"2026-02-19T09:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.621577 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.621936 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.622041 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.622130 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.622245 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:53Z","lastTransitionTime":"2026-02-19T09:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.725385 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.725454 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.725477 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.725504 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.725524 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:53Z","lastTransitionTime":"2026-02-19T09:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.829252 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.829341 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.829372 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.829411 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.829515 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:53Z","lastTransitionTime":"2026-02-19T09:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.933254 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.933763 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.933927 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.934262 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:53 crc kubenswrapper[4965]: I0219 09:43:53.934413 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:53Z","lastTransitionTime":"2026-02-19T09:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.037666 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.037991 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.038094 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.038186 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.038369 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:54Z","lastTransitionTime":"2026-02-19T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.141548 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.141593 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.141634 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.141657 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.141671 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:54Z","lastTransitionTime":"2026-02-19T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.180762 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.180840 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.180861 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.180890 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.180913 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:54Z","lastTransitionTime":"2026-02-19T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.197132 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.197293 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.197388 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:54 crc kubenswrapper[4965]: E0219 09:43:54.197399 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:43:54 crc kubenswrapper[4965]: E0219 09:43:54.197531 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.197559 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:54 crc kubenswrapper[4965]: E0219 09:43:54.197710 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:54 crc kubenswrapper[4965]: E0219 09:43:54.197879 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.199441 4965 scope.go:117] "RemoveContainer" containerID="df664c777d3f25b8d74075723b13263568db42db0feb4d1c5a85cc38fc50aee9" Feb 19 09:43:54 crc kubenswrapper[4965]: E0219 09:43:54.199776 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dcfpx_openshift-ovn-kubernetes(7c788dfa-1923-4a2b-9619-73acf92ec849)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" Feb 19 09:43:54 crc kubenswrapper[4965]: E0219 09:43:54.204512 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:54Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.208048 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 01:51:51.512198084 +0000 UTC Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.210703 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.210767 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.210788 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.210815 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.210836 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:54Z","lastTransitionTime":"2026-02-19T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:54 crc kubenswrapper[4965]: E0219 09:43:54.236040 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:54Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.242302 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.242367 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.242392 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.242424 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.242447 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:54Z","lastTransitionTime":"2026-02-19T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:54 crc kubenswrapper[4965]: E0219 09:43:54.261978 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:54Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.267768 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.267838 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.267859 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.268261 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.268488 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:54Z","lastTransitionTime":"2026-02-19T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:54 crc kubenswrapper[4965]: E0219 09:43:54.290853 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:54Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.296307 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.296373 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.296386 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.296406 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.296419 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:54Z","lastTransitionTime":"2026-02-19T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:54 crc kubenswrapper[4965]: E0219 09:43:54.313094 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:54Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:54 crc kubenswrapper[4965]: E0219 09:43:54.314017 4965 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.316140 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.316176 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.316189 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.316263 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.316282 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:54Z","lastTransitionTime":"2026-02-19T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.420234 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.420272 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.420283 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.420299 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.420310 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:54Z","lastTransitionTime":"2026-02-19T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.523373 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.523437 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.523457 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.523526 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.523557 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:54Z","lastTransitionTime":"2026-02-19T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.627621 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.627679 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.627703 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.627732 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.627756 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:54Z","lastTransitionTime":"2026-02-19T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.730779 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.730855 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.730879 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.730909 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.730929 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:54Z","lastTransitionTime":"2026-02-19T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.834297 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.834394 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.834413 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.834446 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.834465 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:54Z","lastTransitionTime":"2026-02-19T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.937529 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.937594 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.937611 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.937633 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:54 crc kubenswrapper[4965]: I0219 09:43:54.937651 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:54Z","lastTransitionTime":"2026-02-19T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.041765 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.041859 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.041884 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.041915 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.041938 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:55Z","lastTransitionTime":"2026-02-19T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.144795 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.144854 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.144866 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.144886 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.144900 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:55Z","lastTransitionTime":"2026-02-19T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.208226 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 19:01:56.262758083 +0000 UTC Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.222621 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df664c777d3f25b8d74075723b13263568db42db0feb4d1c5a85cc38fc50aee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df664c777d3f25b8d74075723b13263568db42db0feb4d1c5a85cc38fc50aee9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:41Z\\\",\\\"message\\\":\\\"81 6971 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 09:43:41.162525 6971 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nF0219 09:43:41.162540 6971 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:41Z is after 2025-08-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:43:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dcfpx_openshift-ovn-kubernetes(7c788dfa-1923-4a2b-9619-73acf92ec849)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a484
2ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.242908 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ab24976-06f3-4373-825a-5234ff24f2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef52c51fd38bf34f1fc3eb014d85c40137dd15030237334159ffbb71e1d6c2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcbd8e6b02f20a249ddb3fbf20ddd72a94b40
fd420cb6ad4c59ea513994ac382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g5jnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.247869 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.247943 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.247968 4965 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.247994 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.248013 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:55Z","lastTransitionTime":"2026-02-19T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.259903 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf8
6d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.276013 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.292423 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.308016 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123
d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.323651 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.340753 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:43:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.350858 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.350911 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.350923 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.350941 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.350950 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:55Z","lastTransitionTime":"2026-02-19T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.357465 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896fd4103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.371271 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3302b1d7-5be5-4bc8-94c7-0f68e1ce30d9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86427e811b857d8f0f6218893ae936774c7c1b84d8a418371613371462202511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e
18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d39c4c4d5e6839588819d3a113bf207592e5adc5cc2c0651d95da2106b7ec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d39c4c4d5e6839588819d3a113bf207592e5adc5cc2c0651d95da2106b7ec16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:55 crc 
kubenswrapper[4965]: I0219 09:43:55.392684 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84
bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\",\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.413773 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.427295 4965 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.443993 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lwjwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e1b431a-0390-4366-82d1-6cb782c7a9e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lwjwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:55 crc 
kubenswrapper[4965]: I0219 09:43:55.453634 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.453907 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.454052 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.454177 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.454482 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:55Z","lastTransitionTime":"2026-02-19T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.457232 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f12bbde7-ee02-4143-b0a7-af0299919dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5177a63dec267486f4128ae0156f4cb79507b735fca5964a100bc27890e5d13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a087adce637c236bd6e6e1ee13c37
42493b9f09053cd984a7c4334056f06d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://596eb135489ffa0def98b9d17adf293522beb945db0674088c8cb37d1e83b7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://883c5ece4dc1535a8e1ed6490e8eb103b52b27777bb8dd2244aa3ccbcac483d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://883c5ece4dc1535a8e1ed6490e8eb103b52b27777bb8dd2244aa3ccbcac483d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.470495 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.508097 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.542131 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54890991cfb2ac3b404ed7c4c815f5c02e5a23fed0a82dcbc8b0071ae6bda90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:38Z\\\",\\\"message\\\":\\\"2026-02-19T09:42:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e32317b1-e341-4aca-af44-d186bb1c6485\\\\n2026-02-19T09:42:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e32317b1-e341-4aca-af44-d186bb1c6485 to /host/opt/cni/bin/\\\\n2026-02-19T09:42:53Z [verbose] multus-daemon started\\\\n2026-02-19T09:42:53Z [verbose] Readiness Indicator file check\\\\n2026-02-19T09:43:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:55Z is after 2025-08-24T17:21:41Z" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.557478 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.557704 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.557773 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 
09:43:55.557857 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.557947 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:55Z","lastTransitionTime":"2026-02-19T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.661976 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.662037 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.662056 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.662079 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.662097 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:55Z","lastTransitionTime":"2026-02-19T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.764325 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.764383 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.764404 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.764432 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.764452 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:55Z","lastTransitionTime":"2026-02-19T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.868229 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.868310 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.868330 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.868366 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.868387 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:55Z","lastTransitionTime":"2026-02-19T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.971921 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.972431 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.972718 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.972833 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:55 crc kubenswrapper[4965]: I0219 09:43:55.972958 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:55Z","lastTransitionTime":"2026-02-19T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.076073 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.076153 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.076173 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.076249 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.076278 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:56Z","lastTransitionTime":"2026-02-19T09:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.180163 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.180276 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.180301 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.180330 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.180353 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:56Z","lastTransitionTime":"2026-02-19T09:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.197348 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:43:56 crc kubenswrapper[4965]: E0219 09:43:56.197697 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.197365 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.197817 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.197817 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:43:56 crc kubenswrapper[4965]: E0219 09:43:56.197980 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:43:56 crc kubenswrapper[4965]: E0219 09:43:56.198187 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:43:56 crc kubenswrapper[4965]: E0219 09:43:56.198379 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.208672 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 23:10:13.146059266 +0000 UTC Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.283592 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.283673 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.283707 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.283737 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.283760 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:56Z","lastTransitionTime":"2026-02-19T09:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.387278 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.387367 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.387392 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.387425 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.387450 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:56Z","lastTransitionTime":"2026-02-19T09:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.491864 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.491976 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.491998 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.492054 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.492075 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:56Z","lastTransitionTime":"2026-02-19T09:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.595662 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.595736 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.595754 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.595780 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.595800 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:56Z","lastTransitionTime":"2026-02-19T09:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.698905 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.698976 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.698995 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.699021 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.699043 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:56Z","lastTransitionTime":"2026-02-19T09:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.802997 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.803052 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.803063 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.803082 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.803095 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:56Z","lastTransitionTime":"2026-02-19T09:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.905634 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.905712 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.905732 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.905764 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:56 crc kubenswrapper[4965]: I0219 09:43:56.905784 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:56Z","lastTransitionTime":"2026-02-19T09:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.009779 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.009836 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.009847 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.009865 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.009875 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:57Z","lastTransitionTime":"2026-02-19T09:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.113434 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.113466 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.113475 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.113487 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.113497 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:57Z","lastTransitionTime":"2026-02-19T09:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.209560 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 02:47:33.283205447 +0000 UTC
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.217126 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.217184 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.217212 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.217230 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.217280 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:57Z","lastTransitionTime":"2026-02-19T09:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.321189 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.321239 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.321249 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.321262 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.321272 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:57Z","lastTransitionTime":"2026-02-19T09:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.424468 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.424743 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.424770 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.424810 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.424837 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:57Z","lastTransitionTime":"2026-02-19T09:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.529419 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.529728 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.529767 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.529795 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.529898 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:57Z","lastTransitionTime":"2026-02-19T09:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.633779 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.633833 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.633848 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.633868 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.633883 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:57Z","lastTransitionTime":"2026-02-19T09:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.737387 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.737462 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.737487 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.737516 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.737543 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:57Z","lastTransitionTime":"2026-02-19T09:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.841605 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.841657 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.841669 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.841687 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.841699 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:57Z","lastTransitionTime":"2026-02-19T09:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.945178 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.945307 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.945332 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.945373 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:57 crc kubenswrapper[4965]: I0219 09:43:57.945398 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:57Z","lastTransitionTime":"2026-02-19T09:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.048759 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.048833 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.048857 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.048887 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.048911 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:58Z","lastTransitionTime":"2026-02-19T09:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.152110 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.152187 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.152246 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.152282 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.152306 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:58Z","lastTransitionTime":"2026-02-19T09:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.197561 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.197656 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.197656 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 09:43:58 crc kubenswrapper[4965]: E0219 09:43:58.197743 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8"
Feb 19 09:43:58 crc kubenswrapper[4965]: E0219 09:43:58.197864 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 09:43:58 crc kubenswrapper[4965]: E0219 09:43:58.197954 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.198377 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 09:43:58 crc kubenswrapper[4965]: E0219 09:43:58.198615 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.210262 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 23:08:27.313120625 +0000 UTC
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.256278 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.256343 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.256367 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.256393 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.256410 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:58Z","lastTransitionTime":"2026-02-19T09:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.360041 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.360104 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.360128 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.360157 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.360179 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:58Z","lastTransitionTime":"2026-02-19T09:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.462655 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.462683 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.462691 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.462703 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.462713 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:58Z","lastTransitionTime":"2026-02-19T09:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.565687 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.565726 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.565737 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.565752 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.565764 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:58Z","lastTransitionTime":"2026-02-19T09:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.669247 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.669297 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.669314 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.669336 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.669353 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:58Z","lastTransitionTime":"2026-02-19T09:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.774559 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.774614 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.774631 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.774653 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.774669 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:58Z","lastTransitionTime":"2026-02-19T09:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.877583 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.877629 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.877641 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.877657 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.877668 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:58Z","lastTransitionTime":"2026-02-19T09:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.981725 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.981790 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.981805 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.981828 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:58 crc kubenswrapper[4965]: I0219 09:43:58.981845 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:58Z","lastTransitionTime":"2026-02-19T09:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.084867 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.084953 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.084978 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.085008 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.085037 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:59Z","lastTransitionTime":"2026-02-19T09:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.187871 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.187946 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.187973 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.188001 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.188023 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:59Z","lastTransitionTime":"2026-02-19T09:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.210770 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 12:12:27.159124007 +0000 UTC
Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.290946 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.291017 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.291041 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.291064 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.291082 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:59Z","lastTransitionTime":"2026-02-19T09:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.394537 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.394592 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.394605 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.394621 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.394636 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:59Z","lastTransitionTime":"2026-02-19T09:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.497419 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.497482 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.497500 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.497524 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.497543 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:59Z","lastTransitionTime":"2026-02-19T09:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.601643 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.601730 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.601756 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.601790 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.601815 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:59Z","lastTransitionTime":"2026-02-19T09:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.704967 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.705036 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.705054 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.705079 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.705100 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:59Z","lastTransitionTime":"2026-02-19T09:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.807927 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.807985 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.807998 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.808018 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.808038 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:59Z","lastTransitionTime":"2026-02-19T09:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.912253 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.912701 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.912847 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.912986 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:43:59 crc kubenswrapper[4965]: I0219 09:43:59.913126 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:43:59Z","lastTransitionTime":"2026-02-19T09:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.016118 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.016156 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.016171 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.016209 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.016224 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:00Z","lastTransitionTime":"2026-02-19T09:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.118997 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.119068 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.119092 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.119164 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.119187 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:00Z","lastTransitionTime":"2026-02-19T09:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.197435 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.197435 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.197738 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:44:00 crc kubenswrapper[4965]: E0219 09:44:00.197967 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:44:00 crc kubenswrapper[4965]: E0219 09:44:00.198148 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:44:00 crc kubenswrapper[4965]: E0219 09:44:00.198528 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.198774 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:44:00 crc kubenswrapper[4965]: E0219 09:44:00.199057 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.210919 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 05:25:36.956939756 +0000 UTC Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.222803 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.222860 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.222878 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.222925 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.222944 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:00Z","lastTransitionTime":"2026-02-19T09:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.325462 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.325975 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.326098 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.326217 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.326334 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:00Z","lastTransitionTime":"2026-02-19T09:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.431088 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.431225 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.431249 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.431284 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.431317 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:00Z","lastTransitionTime":"2026-02-19T09:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.534943 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.534998 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.535011 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.535030 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.535042 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:00Z","lastTransitionTime":"2026-02-19T09:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.638284 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.638354 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.638367 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.638389 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.638403 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:00Z","lastTransitionTime":"2026-02-19T09:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.741664 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.741913 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.741925 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.741944 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.741956 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:00Z","lastTransitionTime":"2026-02-19T09:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.845775 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.845832 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.845844 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.845863 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.845877 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:00Z","lastTransitionTime":"2026-02-19T09:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.949228 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.949290 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.949303 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.949324 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:00 crc kubenswrapper[4965]: I0219 09:44:00.949339 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:00Z","lastTransitionTime":"2026-02-19T09:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.052915 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.052998 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.053021 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.053046 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.053063 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:01Z","lastTransitionTime":"2026-02-19T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.156553 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.156625 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.156648 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.156674 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.156696 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:01Z","lastTransitionTime":"2026-02-19T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.211844 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 03:46:14.740155967 +0000 UTC Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.259787 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.260235 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.260259 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.260285 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.260347 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:01Z","lastTransitionTime":"2026-02-19T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.362721 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.362766 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.362778 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.362795 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.362807 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:01Z","lastTransitionTime":"2026-02-19T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.465674 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.465744 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.465760 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.465783 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.465799 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:01Z","lastTransitionTime":"2026-02-19T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.568995 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.569072 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.569092 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.569120 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.569144 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:01Z","lastTransitionTime":"2026-02-19T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.672849 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.672910 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.672923 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.672944 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.672957 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:01Z","lastTransitionTime":"2026-02-19T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.776775 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.776842 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.776859 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.776882 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.776898 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:01Z","lastTransitionTime":"2026-02-19T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.880345 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.880842 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.881072 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.881359 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.881552 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:01Z","lastTransitionTime":"2026-02-19T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.985834 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.985897 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.985911 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.985932 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:01 crc kubenswrapper[4965]: I0219 09:44:01.985946 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:01Z","lastTransitionTime":"2026-02-19T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.089947 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.090029 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.090042 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.090071 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.090089 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:02Z","lastTransitionTime":"2026-02-19T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.193617 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.193694 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.193717 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.193745 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.193763 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:02Z","lastTransitionTime":"2026-02-19T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.197024 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.197086 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.197097 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:44:02 crc kubenswrapper[4965]: E0219 09:44:02.197306 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:44:02 crc kubenswrapper[4965]: E0219 09:44:02.197434 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.197576 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:44:02 crc kubenswrapper[4965]: E0219 09:44:02.197772 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:44:02 crc kubenswrapper[4965]: E0219 09:44:02.198104 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.212395 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 19:13:42.339710054 +0000 UTC Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.298884 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.298954 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.298973 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.298998 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.299020 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:02Z","lastTransitionTime":"2026-02-19T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.402543 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.402631 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.402652 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.402683 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.402704 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:02Z","lastTransitionTime":"2026-02-19T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.506746 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.506812 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.506825 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.506846 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.506858 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:02Z","lastTransitionTime":"2026-02-19T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.611539 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.611600 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.611613 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.611633 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.611648 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:02Z","lastTransitionTime":"2026-02-19T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.714339 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.714421 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.714443 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.714472 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.714494 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:02Z","lastTransitionTime":"2026-02-19T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.818068 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.818138 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.818156 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.818185 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.818264 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:02Z","lastTransitionTime":"2026-02-19T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.921944 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.922006 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.922023 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.922047 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:02 crc kubenswrapper[4965]: I0219 09:44:02.922065 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:02Z","lastTransitionTime":"2026-02-19T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.025449 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.025500 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.025517 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.025539 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.025556 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:03Z","lastTransitionTime":"2026-02-19T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.129301 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.129361 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.129379 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.129404 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.129424 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:03Z","lastTransitionTime":"2026-02-19T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.213046 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 14:33:01.219917124 +0000 UTC Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.232100 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.232163 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.232183 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.232233 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.232253 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:03Z","lastTransitionTime":"2026-02-19T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.334804 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.334877 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.334902 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.334932 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.334951 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:03Z","lastTransitionTime":"2026-02-19T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.438571 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.438643 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.438667 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.438697 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.438720 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:03Z","lastTransitionTime":"2026-02-19T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.541847 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.541908 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.541920 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.541937 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.541947 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:03Z","lastTransitionTime":"2026-02-19T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.645690 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.645737 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.645749 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.645769 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.645781 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:03Z","lastTransitionTime":"2026-02-19T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.749548 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.749648 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.749680 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.749712 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.749734 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:03Z","lastTransitionTime":"2026-02-19T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.853250 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.853319 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.853335 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.853360 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.853375 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:03Z","lastTransitionTime":"2026-02-19T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.956768 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.956816 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.956826 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.956843 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:03 crc kubenswrapper[4965]: I0219 09:44:03.956854 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:03Z","lastTransitionTime":"2026-02-19T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.059141 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.059184 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.059208 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.059226 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.059236 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:04Z","lastTransitionTime":"2026-02-19T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.161641 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.161690 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.161702 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.161721 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.161733 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:04Z","lastTransitionTime":"2026-02-19T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.197430 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.197476 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.197476 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.197626 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:44:04 crc kubenswrapper[4965]: E0219 09:44:04.197766 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:44:04 crc kubenswrapper[4965]: E0219 09:44:04.197863 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:44:04 crc kubenswrapper[4965]: E0219 09:44:04.197980 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:44:04 crc kubenswrapper[4965]: E0219 09:44:04.198046 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.213862 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:44:41.748010565 +0000 UTC Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.216843 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.264891 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.265226 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.265320 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.265388 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.265461 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:04Z","lastTransitionTime":"2026-02-19T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.317509 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.317782 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.317850 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.317962 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.318030 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:04Z","lastTransitionTime":"2026-02-19T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:04 crc kubenswrapper[4965]: E0219 09:44:04.335273 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:44:04Z is after 2025-08-24T17:21:41Z" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.340267 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.340401 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.340505 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.340608 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.340695 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:04Z","lastTransitionTime":"2026-02-19T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:04 crc kubenswrapper[4965]: E0219 09:44:04.357332 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:44:04Z is after 2025-08-24T17:21:41Z" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.362373 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.362513 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.362601 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.362692 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.362775 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:04Z","lastTransitionTime":"2026-02-19T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:04 crc kubenswrapper[4965]: E0219 09:44:04.376849 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:44:04Z is after 2025-08-24T17:21:41Z" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.381691 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.381732 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.381748 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.381769 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.381782 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:04Z","lastTransitionTime":"2026-02-19T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:04 crc kubenswrapper[4965]: E0219 09:44:04.394807 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:44:04Z is after 2025-08-24T17:21:41Z" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.398926 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.398972 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.398984 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.399007 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.399020 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:04Z","lastTransitionTime":"2026-02-19T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:04 crc kubenswrapper[4965]: E0219 09:44:04.415245 4965 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T09:44:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f1c83089-21b1-454c-b8cd-3bf0aaa04cd0\\\",\\\"systemUUID\\\":\\\"70334fb7-3860-4c43-90b6-37f049faeb9d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:44:04Z is after 2025-08-24T17:21:41Z"
Feb 19 09:44:04 crc kubenswrapper[4965]: E0219 09:44:04.415712 4965 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.417842 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.417894 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.417905 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.417923 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.417934 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:04Z","lastTransitionTime":"2026-02-19T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.520434 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.520518 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.520543 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.520575 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.520600 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:04Z","lastTransitionTime":"2026-02-19T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.624104 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.624170 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.624219 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.624243 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.624260 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:04Z","lastTransitionTime":"2026-02-19T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.727637 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.727715 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.727737 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.727769 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.727796 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:04Z","lastTransitionTime":"2026-02-19T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.830702 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.830777 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.830797 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.830824 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.830843 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:04Z","lastTransitionTime":"2026-02-19T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.934718 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.934760 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.934771 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.934785 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:04 crc kubenswrapper[4965]: I0219 09:44:04.934794 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:04Z","lastTransitionTime":"2026-02-19T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.038746 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.038821 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.038840 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.038865 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.038885 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:05Z","lastTransitionTime":"2026-02-19T09:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.142446 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.142528 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.142548 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.143027 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.143079 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:05Z","lastTransitionTime":"2026-02-19T09:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.214984 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 07:34:24.951527277 +0000 UTC Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.217545 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ef3eb8-6103-492d-b6ef-f16081d15e83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://107d47a2c3ddc138ad383ab20f81dabe2c31af50f7bd66c31b66df79488ba837\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123d1687b574502f769bde3121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9g9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7mhh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:44:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 
09:44:05.246570 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.246606 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.246620 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.246639 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.246652 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:05Z","lastTransitionTime":"2026-02-19T09:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.251809 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c788dfa-1923-4a2b-9619-73acf92ec849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df664c777d3f25b8d74075723b13263568db42db0feb4d1c5a85cc38fc50aee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df664c777d3f25b8d74075723b13263568db42db0feb4d1c5a85cc38fc50aee9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:41Z\\\",\\\"message\\\":\\\"81 6971 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0219 09:43:41.162525 6971 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nF0219 09:43:41.162540 6971 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:43:41Z is after 2025-08-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:43:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dcfpx_openshift-ovn-kubernetes(7c788dfa-1923-4a2b-9619-73acf92ec849)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743e3e1a0f2cf7a484
2ca3162169cc1bbc399a823070546625450cc140220256\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d758w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dcfpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:44:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.266684 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ab24976-06f3-4373-825a-5234ff24f2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef52c51fd38bf34f1fc3eb014d85c40137dd15030237334159ffbb71e1d6c2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcbd8e6b02f20a249ddb3fbf20ddd72a94b40
fd420cb6ad4c59ea513994ac382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ll7dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-g5jnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:44:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.286683 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdd2c04f5bfa6800521c39502b241dfea1a0b9d3ddde4eb92d501d28bcfad1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T09:44:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.304693 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:44:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.321779 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed9a04147ac88af087b35406b7fc4e1261b034a9fbfa0014446cdc08743f7184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a27905a4c42a1d28d582484efe02020cd2b7d5a5af7c53787412705c7a6da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:44:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.336681 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85d200ad-dc81-4825-a3e0-976c042ebfd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e2ac875fca92d3c631dc7856cd9f72b9abbf3f2edcbc7efeb49ce1c03ac52a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29999666cb6f12b3a4a394a38d4304dd636fe7106b771ca4ef541693fbfc76a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9fb1ec6375fa0345ae67191ebc522471cabd2510440f8051132b833c0fa595e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:44:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.349562 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.349592 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.349605 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.349622 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.349636 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:05Z","lastTransitionTime":"2026-02-19T09:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.350742 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:44:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.366035 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ca9c67a49c188984680f98e96b659087034f30727c1fcdad7dfc298157745c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T09:44:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.383686 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ce37d0-9ace-438a-bdd4-6bb30e41bac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab682da53d115c9e5ce5dca08aae544673283d03b3e11ba9d28ca7896fd4103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb824510a2535cb8bba0af3bf818326f23c38b99cdbba6ec0222868fd138383\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0430f7a884b6910109eb79b8045aa2d46a9e9fa3891d5c963ac1cc317f97e93d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8947fc4018a31dfb6e4a4999cd0ce125e31019a6e04b2a14edac4a29c3a372e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c3b065fcc4e11f5576c86810e3af403b4336878950bb02f61e1e461b38b009e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4089ca812b00a9e7ab8b2ba1f148d929de9837384e6b0b64ea8f62dd6917fee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c0fceec5800537c79268d8bad66cd51cedd7e6442e8f08ea259dd5714334a81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m5tr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vpj8c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:44:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.396585 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3302b1d7-5be5-4bc8-94c7-0f68e1ce30d9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86427e811b857d8f0f6218893ae936774c7c1b84d8a418371613371462202511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d39c4c4d5e6839588819d3a113bf207592e5adc5cc2c0651d95da2106b7ec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d39c4c4d5e6839588819d3a113bf207592e5adc5cc2c0651d95da2106b7ec16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:44:05Z is 
after 2025-08-24T17:21:41Z" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.427096 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7d88239-9e31-438d-8fbe-b888737049fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe4d25afcf03528dd5a9f63a4e01dc1bd8cb1280d135a9e45354093435639dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/
var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5aab11bb7bd1e0dd00db41d01e040eb635b79e9daaf464d85179fa060e28280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19154ece5f71f57be5700b6a6c2deb66f98cf89c16a8d934a4492e7d35310ff6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d3c713c862d929a27db9588fe204a97afcfcb577e4776e2ead6515515e3f250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f486bc90f80feb22ce88788928b27268b87b6e524ba40c933d430aa44c3e09d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0a24c92eed70e5380e04109d46430cd9e493cf83e4041ffd2ed24c57515650c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0a24c92eed70e5380e04109d46430cd9e493cf83e4041ffd2ed24c57515650c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3853b7baecf2ddd6650964380f069aff3c8b12148cf59440020e28749494073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3853b7baecf2ddd6650964380f069aff3c8b12148cf59440020e28749494073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9cb4d8037687c686e9508a91aa37660803621e7f12087b0a76f97ee2fddbb6f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cb4d8037687c686e9508a91aa37660803621e7f12087b0a76f97ee2fddbb6f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19
T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:44:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.446723 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"210f2216-544c-43a1-813b-68e47da7447e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T09:42:39Z\\\"
,\\\"message\\\":\\\"W0219 09:42:28.573633 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 09:42:28.575071 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771494148 cert, and key in /tmp/serving-cert-427400488/serving-signer.crt, /tmp/serving-cert-427400488/serving-signer.key\\\\nI0219 09:42:29.117984 1 observer_polling.go:159] Starting file observer\\\\nW0219 09:42:29.120780 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 09:42:29.121009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 09:42:29.122010 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-427400488/tls.crt::/tmp/serving-cert-427400488/tls.key\\\\\\\"\\\\nF0219 09:42:39.487179 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:44:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.453822 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.453910 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.453930 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.453958 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.453976 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:05Z","lastTransitionTime":"2026-02-19T09:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.467473 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-nsjqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e0b10c6-02b7-49d0-9a76-e89ebbb00528\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54890991cfb2ac3b404ed7c4c815f5c02e5a23fed0a82dcbc8b0071ae6bda90b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T09:43:38Z\\\",\\\"message\\\":\\\"2026-02-19T09:42:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e32317b1-e341-4aca-af44-d186bb1c6485\\\\n2026-02-19T09:42:53+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e32317b1-e341-4aca-af44-d186bb1c6485 to /host/opt/cni/bin/\\\\n2026-02-19T09:42:53Z [verbose] multus-daemon started\\\\n2026-02-19T09:42:53Z [verbose] Readiness Indicator file check\\\\n2026-02-19T09:43:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4tp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-nsjqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:44:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.477849 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pjxbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3965f16-f751-4de2-9f58-db2070fc99b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d81fdde65dd95b5dd26fd2bccb3c26f4491eee9891d4e837fd01338432057878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pjxbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:44:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.489085 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lwjwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e1b431a-0390-4366-82d1-6cb782c7a9e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdh66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:43:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lwjwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:44:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:44:05 crc 
kubenswrapper[4965]: I0219 09:44:05.504763 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f12bbde7-ee02-4143-b0a7-af0299919dda\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:43:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5177a63dec267486f4128ae0156f4cb79507b735fca5964a100bc27890e5d13a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a087adce637c236bd6e6e1ee13c3742493b9f09053cd984a7c4334056f06d2c\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://596eb135489ffa0def98b9d17adf293522beb945db0674088c8cb37d1e83b7d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://883c5ece4dc1535a8e1ed6490e8eb103b52b27777bb8dd2244aa3ccbcac483d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://883c5ece4dc1535a8e1ed6490e8eb103b52b27777bb8dd2244aa3ccbcac483d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T09:42:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T09:42:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:44:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.521229 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:44:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.535357 4965 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6nv8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7972115-bfc1-42ee-b756-e394806eed51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://597dabc5893cced827268c6dc222b2f1535c93e6086c25cec52e7f612952eb65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T09:42:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vd96\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T09:42:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6nv8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T09:44:05Z is after 2025-08-24T17:21:41Z" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.557353 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.557399 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.557409 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.557425 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.557439 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:05Z","lastTransitionTime":"2026-02-19T09:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.660627 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.660718 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.660732 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.660759 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.660775 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:05Z","lastTransitionTime":"2026-02-19T09:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.763656 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.763707 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.763719 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.763740 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.763753 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:05Z","lastTransitionTime":"2026-02-19T09:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.867134 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.867183 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.867214 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.867235 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.867253 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:05Z","lastTransitionTime":"2026-02-19T09:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.975306 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.975411 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.975427 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.975452 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:05 crc kubenswrapper[4965]: I0219 09:44:05.975472 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:05Z","lastTransitionTime":"2026-02-19T09:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.079141 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.079188 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.079220 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.079238 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.079251 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:06Z","lastTransitionTime":"2026-02-19T09:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.088668 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs\") pod \"network-metrics-daemon-lwjwk\" (UID: \"1e1b431a-0390-4366-82d1-6cb782c7a9e8\") " pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:44:06 crc kubenswrapper[4965]: E0219 09:44:06.088865 4965 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:44:06 crc kubenswrapper[4965]: E0219 09:44:06.088999 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs podName:1e1b431a-0390-4366-82d1-6cb782c7a9e8 nodeName:}" failed. No retries permitted until 2026-02-19 09:45:10.088977221 +0000 UTC m=+165.710298601 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs") pod "network-metrics-daemon-lwjwk" (UID: "1e1b431a-0390-4366-82d1-6cb782c7a9e8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.182652 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.182766 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.182800 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.183051 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.183093 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:06Z","lastTransitionTime":"2026-02-19T09:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.197336 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.197418 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.197469 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.197408 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:44:06 crc kubenswrapper[4965]: E0219 09:44:06.197830 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:44:06 crc kubenswrapper[4965]: E0219 09:44:06.198027 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:44:06 crc kubenswrapper[4965]: E0219 09:44:06.198159 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:44:06 crc kubenswrapper[4965]: E0219 09:44:06.198353 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.216152 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 21:20:12.916871119 +0000 UTC Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.286733 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.286765 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.286778 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.286794 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.286807 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:06Z","lastTransitionTime":"2026-02-19T09:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.389853 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.389898 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.389909 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.389925 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.389936 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:06Z","lastTransitionTime":"2026-02-19T09:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.493115 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.493176 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.493225 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.493251 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.493270 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:06Z","lastTransitionTime":"2026-02-19T09:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.596563 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.596616 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.596635 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.596659 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.596677 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:06Z","lastTransitionTime":"2026-02-19T09:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.699733 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.699819 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.699840 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.700306 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.700553 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:06Z","lastTransitionTime":"2026-02-19T09:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.804943 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.805016 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.805036 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.805060 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.805078 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:06Z","lastTransitionTime":"2026-02-19T09:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.909089 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.909153 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.909170 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.909226 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:06 crc kubenswrapper[4965]: I0219 09:44:06.909246 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:06Z","lastTransitionTime":"2026-02-19T09:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.012478 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.012985 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.013365 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.013578 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.013748 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:07Z","lastTransitionTime":"2026-02-19T09:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.117927 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.118012 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.118026 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.118050 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.118068 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:07Z","lastTransitionTime":"2026-02-19T09:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.217206 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:05:10.184380479 +0000 UTC Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.219954 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.219998 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.220016 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.220038 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.220215 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:07Z","lastTransitionTime":"2026-02-19T09:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.323935 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.324449 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.324605 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.324788 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.324949 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:07Z","lastTransitionTime":"2026-02-19T09:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.429029 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.429456 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.429684 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.429827 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.429959 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:07Z","lastTransitionTime":"2026-02-19T09:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.533587 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.534057 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.534218 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.534346 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.534451 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:07Z","lastTransitionTime":"2026-02-19T09:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.637791 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.637865 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.637881 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.637899 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.637912 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:07Z","lastTransitionTime":"2026-02-19T09:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.740500 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.740569 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.740580 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.740597 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.740620 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:07Z","lastTransitionTime":"2026-02-19T09:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.843610 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.843676 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.843693 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.843716 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.843736 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:07Z","lastTransitionTime":"2026-02-19T09:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.947015 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.947088 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.947105 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.947133 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:07 crc kubenswrapper[4965]: I0219 09:44:07.947155 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:07Z","lastTransitionTime":"2026-02-19T09:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.049665 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.049853 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.049884 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.050146 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.050243 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:08Z","lastTransitionTime":"2026-02-19T09:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.154079 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.154159 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.154187 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.154257 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.154284 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:08Z","lastTransitionTime":"2026-02-19T09:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.197638 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:44:08 crc kubenswrapper[4965]: E0219 09:44:08.197839 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.197874 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.197947 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.197959 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:44:08 crc kubenswrapper[4965]: E0219 09:44:08.198079 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:44:08 crc kubenswrapper[4965]: E0219 09:44:08.198294 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:44:08 crc kubenswrapper[4965]: E0219 09:44:08.198606 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.218961 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 01:47:46.220428686 +0000 UTC Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.257621 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.257698 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.257720 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.257751 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.257775 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:08Z","lastTransitionTime":"2026-02-19T09:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.363495 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.363578 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.363601 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.363633 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.363655 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:08Z","lastTransitionTime":"2026-02-19T09:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.468120 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.468219 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.468241 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.468268 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.468288 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:08Z","lastTransitionTime":"2026-02-19T09:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.570898 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.570967 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.570988 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.571011 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.571028 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:08Z","lastTransitionTime":"2026-02-19T09:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.674347 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.674544 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.674571 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.674596 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.674619 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:08Z","lastTransitionTime":"2026-02-19T09:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.777385 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.777487 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.777508 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.777533 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.777552 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:08Z","lastTransitionTime":"2026-02-19T09:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.880211 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.880246 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.880256 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.880269 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.880278 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:08Z","lastTransitionTime":"2026-02-19T09:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.983441 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.983507 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.983524 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.983553 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:08 crc kubenswrapper[4965]: I0219 09:44:08.983572 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:08Z","lastTransitionTime":"2026-02-19T09:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.086835 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.086959 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.086979 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.087011 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.087031 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:09Z","lastTransitionTime":"2026-02-19T09:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.190477 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.190533 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.190552 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.190576 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.190593 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:09Z","lastTransitionTime":"2026-02-19T09:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.198522 4965 scope.go:117] "RemoveContainer" containerID="df664c777d3f25b8d74075723b13263568db42db0feb4d1c5a85cc38fc50aee9" Feb 19 09:44:09 crc kubenswrapper[4965]: E0219 09:44:09.198847 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dcfpx_openshift-ovn-kubernetes(7c788dfa-1923-4a2b-9619-73acf92ec849)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.219145 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 18:27:07.297839857 +0000 UTC Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.293785 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.293840 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.293853 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.293873 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.293889 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:09Z","lastTransitionTime":"2026-02-19T09:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.397040 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.397109 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.397126 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.397154 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.397172 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:09Z","lastTransitionTime":"2026-02-19T09:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.500656 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.500743 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.500771 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.500805 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.500828 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:09Z","lastTransitionTime":"2026-02-19T09:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.604289 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.604366 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.604381 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.604402 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.604419 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:09Z","lastTransitionTime":"2026-02-19T09:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.707489 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.707591 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.707619 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.707649 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.707667 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:09Z","lastTransitionTime":"2026-02-19T09:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.811464 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.811528 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.811541 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.811556 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.811570 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:09Z","lastTransitionTime":"2026-02-19T09:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.915705 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.915758 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.915771 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.915788 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:09 crc kubenswrapper[4965]: I0219 09:44:09.915799 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:09Z","lastTransitionTime":"2026-02-19T09:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.019635 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.019685 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.019720 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.019740 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.019750 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:10Z","lastTransitionTime":"2026-02-19T09:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.124362 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.124429 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.124443 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.124469 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.124485 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:10Z","lastTransitionTime":"2026-02-19T09:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.197771 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.197913 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.197827 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.198022 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 09:44:10 crc kubenswrapper[4965]: E0219 09:44:10.198041 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 09:44:10 crc kubenswrapper[4965]: E0219 09:44:10.198224 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8"
Feb 19 09:44:10 crc kubenswrapper[4965]: E0219 09:44:10.198374 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 09:44:10 crc kubenswrapper[4965]: E0219 09:44:10.198490 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.219763 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 22:41:57.280728549 +0000 UTC
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.227046 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.227077 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.227107 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.227123 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.227133 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:10Z","lastTransitionTime":"2026-02-19T09:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.331246 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.331332 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.331366 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.331382 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.331392 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:10Z","lastTransitionTime":"2026-02-19T09:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.434993 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.435056 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.435073 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.435098 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.435116 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:10Z","lastTransitionTime":"2026-02-19T09:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.538736 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.538819 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.538846 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.538873 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.538893 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:10Z","lastTransitionTime":"2026-02-19T09:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.642629 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.642710 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.642728 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.642758 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.642779 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:10Z","lastTransitionTime":"2026-02-19T09:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.746752 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.746857 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.746877 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.746902 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.746921 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:10Z","lastTransitionTime":"2026-02-19T09:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.850305 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.850415 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.850433 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.850465 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.850483 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:10Z","lastTransitionTime":"2026-02-19T09:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.953151 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.953247 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.953269 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.953293 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:10 crc kubenswrapper[4965]: I0219 09:44:10.953310 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:10Z","lastTransitionTime":"2026-02-19T09:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.056929 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.056997 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.057010 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.057039 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.057052 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:11Z","lastTransitionTime":"2026-02-19T09:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.160464 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.160523 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.160547 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.160569 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.160583 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:11Z","lastTransitionTime":"2026-02-19T09:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.220701 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 22:30:59.12668379 +0000 UTC
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.263895 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.263957 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.263984 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.264016 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.264039 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:11Z","lastTransitionTime":"2026-02-19T09:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.366980 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.367030 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.367041 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.367057 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.367069 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:11Z","lastTransitionTime":"2026-02-19T09:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.470282 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.470327 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.470335 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.470353 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.470363 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:11Z","lastTransitionTime":"2026-02-19T09:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.573870 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.573971 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.573995 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.574025 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.574050 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:11Z","lastTransitionTime":"2026-02-19T09:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.677369 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.677425 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.677433 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.677453 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.677464 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:11Z","lastTransitionTime":"2026-02-19T09:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.780742 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.780812 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.780821 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.780841 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.780854 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:11Z","lastTransitionTime":"2026-02-19T09:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.883485 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.883552 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.883573 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.883602 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.883620 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:11Z","lastTransitionTime":"2026-02-19T09:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.985980 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.986031 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.986045 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.986064 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:11 crc kubenswrapper[4965]: I0219 09:44:11.986074 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:11Z","lastTransitionTime":"2026-02-19T09:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.088434 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.088492 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.088505 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.088527 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.088540 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:12Z","lastTransitionTime":"2026-02-19T09:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.191256 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.191343 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.191354 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.191369 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.191378 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:12Z","lastTransitionTime":"2026-02-19T09:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.197761 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.197851 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk"
Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.197979 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.198021 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 09:44:12 crc kubenswrapper[4965]: E0219 09:44:12.198091 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 09:44:12 crc kubenswrapper[4965]: E0219 09:44:12.198130 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 09:44:12 crc kubenswrapper[4965]: E0219 09:44:12.198316 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 09:44:12 crc kubenswrapper[4965]: E0219 09:44:12.198377 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.221773 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 22:34:48.595815169 +0000 UTC Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.295685 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.295740 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.295752 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.295772 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.295784 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:12Z","lastTransitionTime":"2026-02-19T09:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.399031 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.399130 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.399151 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.399179 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.399225 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:12Z","lastTransitionTime":"2026-02-19T09:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.502301 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.502404 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.502430 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.502460 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.502484 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:12Z","lastTransitionTime":"2026-02-19T09:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.606136 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.606258 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.606291 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.606321 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.606339 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:12Z","lastTransitionTime":"2026-02-19T09:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.709625 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.709692 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.709715 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.709746 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.709769 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:12Z","lastTransitionTime":"2026-02-19T09:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.812638 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.812700 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.812717 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.812742 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.812764 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:12Z","lastTransitionTime":"2026-02-19T09:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.915969 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.916024 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.916039 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.916059 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:12 crc kubenswrapper[4965]: I0219 09:44:12.916076 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:12Z","lastTransitionTime":"2026-02-19T09:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.019394 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.019473 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.019501 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.019525 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.019543 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:13Z","lastTransitionTime":"2026-02-19T09:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.121915 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.121952 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.121961 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.121979 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.121991 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:13Z","lastTransitionTime":"2026-02-19T09:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.222451 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:04:56.696687309 +0000 UTC Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.224632 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.224668 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.224676 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.224690 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.224699 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:13Z","lastTransitionTime":"2026-02-19T09:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.327783 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.327846 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.327863 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.327887 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.327905 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:13Z","lastTransitionTime":"2026-02-19T09:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.430737 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.430810 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.430829 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.430859 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.430878 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:13Z","lastTransitionTime":"2026-02-19T09:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.534957 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.535004 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.535039 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.535106 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.535118 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:13Z","lastTransitionTime":"2026-02-19T09:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.638783 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.638864 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.638888 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.638913 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.638931 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:13Z","lastTransitionTime":"2026-02-19T09:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.741894 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.741942 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.741953 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.741976 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.741990 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:13Z","lastTransitionTime":"2026-02-19T09:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.844448 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.844521 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.844541 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.844567 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.844591 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:13Z","lastTransitionTime":"2026-02-19T09:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.948292 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.948357 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.948369 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.948387 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:13 crc kubenswrapper[4965]: I0219 09:44:13.948398 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:13Z","lastTransitionTime":"2026-02-19T09:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.052568 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.052622 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.052634 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.052652 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.052663 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:14Z","lastTransitionTime":"2026-02-19T09:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.156640 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.156710 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.156727 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.156752 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.156771 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:14Z","lastTransitionTime":"2026-02-19T09:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.197077 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:44:14 crc kubenswrapper[4965]: E0219 09:44:14.197361 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.197552 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:44:14 crc kubenswrapper[4965]: E0219 09:44:14.197701 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.197727 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.197818 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:44:14 crc kubenswrapper[4965]: E0219 09:44:14.198337 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 09:44:14 crc kubenswrapper[4965]: E0219 09:44:14.198474 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.222659 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 13:00:24.985401206 +0000 UTC
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.260394 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.260464 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.260487 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.260516 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.260539 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:14Z","lastTransitionTime":"2026-02-19T09:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.364218 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.364275 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.364287 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.364302 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.364311 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:14Z","lastTransitionTime":"2026-02-19T09:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.467663 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.467716 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.467733 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.467758 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.467776 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:14Z","lastTransitionTime":"2026-02-19T09:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.521475 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.521559 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.521578 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.521604 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.521624 4965 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T09:44:14Z","lastTransitionTime":"2026-02-19T09:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.607234 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-g4kcq"]
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.607736 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g4kcq"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.610086 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.610623 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.610822 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.611528 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.657983 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=56.657948767 podStartE2EDuration="56.657948767s" podCreationTimestamp="2026-02-19 09:43:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:14.657369644 +0000 UTC m=+110.278690994" watchObservedRunningTime="2026-02-19 09:44:14.657948767 +0000 UTC m=+110.279270117"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.693774 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-6nv8r" podStartSLOduration=87.69374673 podStartE2EDuration="1m27.69374673s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:14.693657018 +0000 UTC m=+110.314978338" watchObservedRunningTime="2026-02-19 09:44:14.69374673 +0000 UTC m=+110.315068060"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.706822 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/04c63e06-adcd-4fcc-b5fc-b1f226829fbd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g4kcq\" (UID: \"04c63e06-adcd-4fcc-b5fc-b1f226829fbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g4kcq"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.706873 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04c63e06-adcd-4fcc-b5fc-b1f226829fbd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-g4kcq\" (UID: \"04c63e06-adcd-4fcc-b5fc-b1f226829fbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g4kcq"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.706976 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04c63e06-adcd-4fcc-b5fc-b1f226829fbd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g4kcq\" (UID: \"04c63e06-adcd-4fcc-b5fc-b1f226829fbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g4kcq"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.707149 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/04c63e06-adcd-4fcc-b5fc-b1f226829fbd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g4kcq\" (UID: \"04c63e06-adcd-4fcc-b5fc-b1f226829fbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g4kcq"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.707260 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04c63e06-adcd-4fcc-b5fc-b1f226829fbd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-g4kcq\" (UID: \"04c63e06-adcd-4fcc-b5fc-b1f226829fbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g4kcq"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.711298 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-nsjqz" podStartSLOduration=87.711281637 podStartE2EDuration="1m27.711281637s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:14.711136444 +0000 UTC m=+110.332457794" watchObservedRunningTime="2026-02-19 09:44:14.711281637 +0000 UTC m=+110.332602957"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.728123 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-pjxbf" podStartSLOduration=87.728096347 podStartE2EDuration="1m27.728096347s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:14.727727088 +0000 UTC m=+110.349048408" watchObservedRunningTime="2026-02-19 09:44:14.728096347 +0000 UTC m=+110.349417667"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.742942 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g5jnt" podStartSLOduration=86.742921088 podStartE2EDuration="1m26.742921088s" podCreationTimestamp="2026-02-19 09:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:14.742382026 +0000 UTC m=+110.363703336" watchObservedRunningTime="2026-02-19 09:44:14.742921088 +0000 UTC m=+110.364242428"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.808449 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04c63e06-adcd-4fcc-b5fc-b1f226829fbd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-g4kcq\" (UID: \"04c63e06-adcd-4fcc-b5fc-b1f226829fbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g4kcq"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.808500 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/04c63e06-adcd-4fcc-b5fc-b1f226829fbd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g4kcq\" (UID: \"04c63e06-adcd-4fcc-b5fc-b1f226829fbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g4kcq"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.808517 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04c63e06-adcd-4fcc-b5fc-b1f226829fbd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-g4kcq\" (UID: \"04c63e06-adcd-4fcc-b5fc-b1f226829fbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g4kcq"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.808537 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04c63e06-adcd-4fcc-b5fc-b1f226829fbd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g4kcq\" (UID: \"04c63e06-adcd-4fcc-b5fc-b1f226829fbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g4kcq"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.808586 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/04c63e06-adcd-4fcc-b5fc-b1f226829fbd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g4kcq\" (UID: \"04c63e06-adcd-4fcc-b5fc-b1f226829fbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g4kcq"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.808685 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/04c63e06-adcd-4fcc-b5fc-b1f226829fbd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g4kcq\" (UID: \"04c63e06-adcd-4fcc-b5fc-b1f226829fbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g4kcq"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.808671 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/04c63e06-adcd-4fcc-b5fc-b1f226829fbd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g4kcq\" (UID: \"04c63e06-adcd-4fcc-b5fc-b1f226829fbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g4kcq"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.810046 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04c63e06-adcd-4fcc-b5fc-b1f226829fbd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-g4kcq\" (UID: \"04c63e06-adcd-4fcc-b5fc-b1f226829fbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g4kcq"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.816587 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podStartSLOduration=87.816573124 podStartE2EDuration="1m27.816573124s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:14.81603941 +0000 UTC m=+110.437360720" watchObservedRunningTime="2026-02-19 09:44:14.816573124 +0000 UTC m=+110.437894434"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.827153 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04c63e06-adcd-4fcc-b5fc-b1f226829fbd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-g4kcq\" (UID: \"04c63e06-adcd-4fcc-b5fc-b1f226829fbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g4kcq"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.835036 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04c63e06-adcd-4fcc-b5fc-b1f226829fbd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g4kcq\" (UID: \"04c63e06-adcd-4fcc-b5fc-b1f226829fbd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g4kcq"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.901122 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-vpj8c" podStartSLOduration=87.901099473 podStartE2EDuration="1m27.901099473s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:14.88248509 +0000 UTC m=+110.503806400" watchObservedRunningTime="2026-02-19 09:44:14.901099473 +0000 UTC m=+110.522420783"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.925042 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=24.925017006 podStartE2EDuration="24.925017006s" podCreationTimestamp="2026-02-19 09:43:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:14.901570414 +0000 UTC m=+110.522891714" watchObservedRunningTime="2026-02-19 09:44:14.925017006 +0000 UTC m=+110.546338316"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.925152 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=10.925146529 podStartE2EDuration="10.925146529s" podCreationTimestamp="2026-02-19 09:44:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:14.923710504 +0000 UTC m=+110.545031824" watchObservedRunningTime="2026-02-19 09:44:14.925146529 +0000 UTC m=+110.546467839"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.939430 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g4kcq"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.940899 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.940886502 podStartE2EDuration="1m28.940886502s" podCreationTimestamp="2026-02-19 09:42:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:14.940736619 +0000 UTC m=+110.562057949" watchObservedRunningTime="2026-02-19 09:44:14.940886502 +0000 UTC m=+110.562207812"
Feb 19 09:44:14 crc kubenswrapper[4965]: I0219 09:44:14.962366 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=86.962341736 podStartE2EDuration="1m26.962341736s" podCreationTimestamp="2026-02-19 09:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:14.962333616 +0000 UTC m=+110.583654946" watchObservedRunningTime="2026-02-19 09:44:14.962341736 +0000 UTC m=+110.583663046"
Feb 19 09:44:15 crc kubenswrapper[4965]: I0219 09:44:15.223842 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 23:49:47.647219066 +0000 UTC
Feb 19 09:44:15 crc kubenswrapper[4965]: I0219 09:44:15.223932 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Feb 19 09:44:15 crc kubenswrapper[4965]: I0219 09:44:15.235445 4965 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 19 09:44:15 crc kubenswrapper[4965]: I0219 09:44:15.844504 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g4kcq" event={"ID":"04c63e06-adcd-4fcc-b5fc-b1f226829fbd","Type":"ContainerStarted","Data":"2e9640f2b6629a4fe3c41deb736dfceb091f70dc89601f06c7a699916a46b3ac"}
Feb 19 09:44:15 crc kubenswrapper[4965]: I0219 09:44:15.844561 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g4kcq" event={"ID":"04c63e06-adcd-4fcc-b5fc-b1f226829fbd","Type":"ContainerStarted","Data":"44837ad665bc2d4e7792b6ba70f1dbc7528f99468aaf5b51419049ff31684240"}
Feb 19 09:44:16 crc kubenswrapper[4965]: I0219 09:44:16.197438 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 09:44:16 crc kubenswrapper[4965]: I0219 09:44:16.197492 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk"
Feb 19 09:44:16 crc kubenswrapper[4965]: I0219 09:44:16.197509 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 09:44:16 crc kubenswrapper[4965]: I0219 09:44:16.197678 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 09:44:16 crc kubenswrapper[4965]: E0219 09:44:16.197828 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 09:44:16 crc kubenswrapper[4965]: E0219 09:44:16.198037 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 09:44:16 crc kubenswrapper[4965]: E0219 09:44:16.198124 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8"
Feb 19 09:44:16 crc kubenswrapper[4965]: E0219 09:44:16.198338 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 09:44:18 crc kubenswrapper[4965]: I0219 09:44:18.197869 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk"
Feb 19 09:44:18 crc kubenswrapper[4965]: I0219 09:44:18.197885 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 09:44:18 crc kubenswrapper[4965]: I0219 09:44:18.197963 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 09:44:18 crc kubenswrapper[4965]: E0219 09:44:18.198613 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 09:44:18 crc kubenswrapper[4965]: E0219 09:44:18.198385 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8"
Feb 19 09:44:18 crc kubenswrapper[4965]: I0219 09:44:18.198104 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 09:44:18 crc kubenswrapper[4965]: E0219 09:44:18.198700 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 09:44:18 crc kubenswrapper[4965]: E0219 09:44:18.198766 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 09:44:20 crc kubenswrapper[4965]: I0219 09:44:20.197154 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 09:44:20 crc kubenswrapper[4965]: I0219 09:44:20.197289 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 09:44:20 crc kubenswrapper[4965]: I0219 09:44:20.197289 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 09:44:20 crc kubenswrapper[4965]: E0219 09:44:20.197450 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 09:44:20 crc kubenswrapper[4965]: I0219 09:44:20.197494 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk"
Feb 19 09:44:20 crc kubenswrapper[4965]: E0219 09:44:20.197575 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 09:44:20 crc kubenswrapper[4965]: E0219 09:44:20.197819 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8"
Feb 19 09:44:20 crc kubenswrapper[4965]: E0219 09:44:20.197988 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 09:44:22 crc kubenswrapper[4965]: I0219 09:44:22.196827 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 09:44:22 crc kubenswrapper[4965]: I0219 09:44:22.196861 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 09:44:22 crc kubenswrapper[4965]: I0219 09:44:22.196887 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 09:44:22 crc kubenswrapper[4965]: E0219 09:44:22.197000 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 09:44:22 crc kubenswrapper[4965]: I0219 09:44:22.196903 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk"
Feb 19 09:44:22 crc kubenswrapper[4965]: E0219 09:44:22.197122 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 09:44:22 crc kubenswrapper[4965]: E0219 09:44:22.197109 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 09:44:22 crc kubenswrapper[4965]: E0219 09:44:22.197182 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8"
Feb 19 09:44:23 crc kubenswrapper[4965]: I0219 09:44:23.198519 4965 scope.go:117] "RemoveContainer" containerID="df664c777d3f25b8d74075723b13263568db42db0feb4d1c5a85cc38fc50aee9"
Feb 19 09:44:23 crc kubenswrapper[4965]: I0219 09:44:23.878068 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dcfpx_7c788dfa-1923-4a2b-9619-73acf92ec849/ovnkube-controller/3.log"
Feb 19 09:44:23 crc kubenswrapper[4965]: I0219 09:44:23.881743 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerStarted","Data":"48a894cd63123be13228cd57371260ac740cdceb7c7280f8d0d01608f0008dff"}
Feb 19 09:44:23 crc kubenswrapper[4965]: I0219 09:44:23.882187 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx"
Feb 19 09:44:23 crc kubenswrapper[4965]: I0219 09:44:23.914919 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g4kcq" podStartSLOduration=96.914899539 podStartE2EDuration="1m36.914899539s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:15.862581893 +0000 UTC m=+111.483903243" watchObservedRunningTime="2026-02-19 09:44:23.914899539 +0000 UTC m=+119.536220849"
Feb 19 09:44:24 crc kubenswrapper[4965]: I0219 09:44:24.197774 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk"
Feb 19 09:44:24 crc kubenswrapper[4965]: I0219 09:44:24.197845 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 09:44:24 crc kubenswrapper[4965]: E0219 09:44:24.197950 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8"
Feb 19 09:44:24 crc kubenswrapper[4965]: I0219 09:44:24.198024 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 09:44:24 crc kubenswrapper[4965]: I0219 09:44:24.197802 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 09:44:24 crc kubenswrapper[4965]: E0219 09:44:24.198287 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 09:44:24 crc kubenswrapper[4965]: E0219 09:44:24.198376 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 09:44:24 crc kubenswrapper[4965]: E0219 09:44:24.198454 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 09:44:24 crc kubenswrapper[4965]: I0219 09:44:24.239282 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" podStartSLOduration=97.239255443 podStartE2EDuration="1m37.239255443s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:23.916095958 +0000 UTC m=+119.537417288" watchObservedRunningTime="2026-02-19 09:44:24.239255443 +0000 UTC m=+119.860576773"
Feb 19 09:44:24 crc kubenswrapper[4965]: I0219 09:44:24.240684 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lwjwk"]
Feb 19 09:44:24 crc kubenswrapper[4965]: I0219 09:44:24.887457 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nsjqz_5e0b10c6-02b7-49d0-9a76-e89ebbb00528/kube-multus/1.log"
Feb 19 09:44:24 crc kubenswrapper[4965]: I0219 09:44:24.889214 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nsjqz_5e0b10c6-02b7-49d0-9a76-e89ebbb00528/kube-multus/0.log"
Feb 19 09:44:24 crc kubenswrapper[4965]: I0219 09:44:24.889286 4965 generic.go:334] "Generic (PLEG): container finished" podID="5e0b10c6-02b7-49d0-9a76-e89ebbb00528" containerID="54890991cfb2ac3b404ed7c4c815f5c02e5a23fed0a82dcbc8b0071ae6bda90b" exitCode=1
Feb 19 09:44:24 crc kubenswrapper[4965]: I0219 09:44:24.889380 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nsjqz" event={"ID":"5e0b10c6-02b7-49d0-9a76-e89ebbb00528","Type":"ContainerDied","Data":"54890991cfb2ac3b404ed7c4c815f5c02e5a23fed0a82dcbc8b0071ae6bda90b"}
Feb 19 09:44:24 crc kubenswrapper[4965]: I0219 09:44:24.889416 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk"
Feb 19 09:44:24 crc kubenswrapper[4965]: I0219 09:44:24.889447 4965 scope.go:117] "RemoveContainer" containerID="8aef896286f2619adf09fb4e2f4f25543b1d0d69c90fb4d301fb1c215e9b78f8"
Feb 19 09:44:24 crc kubenswrapper[4965]: E0219 09:44:24.889555 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:44:24 crc kubenswrapper[4965]: I0219 09:44:24.890399 4965 scope.go:117] "RemoveContainer" containerID="54890991cfb2ac3b404ed7c4c815f5c02e5a23fed0a82dcbc8b0071ae6bda90b" Feb 19 09:44:24 crc kubenswrapper[4965]: E0219 09:44:24.890684 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-nsjqz_openshift-multus(5e0b10c6-02b7-49d0-9a76-e89ebbb00528)\"" pod="openshift-multus/multus-nsjqz" podUID="5e0b10c6-02b7-49d0-9a76-e89ebbb00528" Feb 19 09:44:25 crc kubenswrapper[4965]: E0219 09:44:25.136774 4965 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 19 09:44:25 crc kubenswrapper[4965]: E0219 09:44:25.304603 4965 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 09:44:25 crc kubenswrapper[4965]: I0219 09:44:25.895554 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nsjqz_5e0b10c6-02b7-49d0-9a76-e89ebbb00528/kube-multus/1.log" Feb 19 09:44:26 crc kubenswrapper[4965]: I0219 09:44:26.196968 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:44:26 crc kubenswrapper[4965]: I0219 09:44:26.196989 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:44:26 crc kubenswrapper[4965]: E0219 09:44:26.197172 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:44:26 crc kubenswrapper[4965]: I0219 09:44:26.196995 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:44:26 crc kubenswrapper[4965]: E0219 09:44:26.197336 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:44:26 crc kubenswrapper[4965]: E0219 09:44:26.197479 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:44:27 crc kubenswrapper[4965]: I0219 09:44:27.197846 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:44:27 crc kubenswrapper[4965]: E0219 09:44:27.198066 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:44:28 crc kubenswrapper[4965]: I0219 09:44:28.197720 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:44:28 crc kubenswrapper[4965]: I0219 09:44:28.197813 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:44:28 crc kubenswrapper[4965]: I0219 09:44:28.197871 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:44:28 crc kubenswrapper[4965]: E0219 09:44:28.197916 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:44:28 crc kubenswrapper[4965]: E0219 09:44:28.198046 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:44:28 crc kubenswrapper[4965]: E0219 09:44:28.198141 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:44:29 crc kubenswrapper[4965]: I0219 09:44:29.197226 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:44:29 crc kubenswrapper[4965]: E0219 09:44:29.197697 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:44:30 crc kubenswrapper[4965]: I0219 09:44:30.197103 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:44:30 crc kubenswrapper[4965]: I0219 09:44:30.197243 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:44:30 crc kubenswrapper[4965]: E0219 09:44:30.197375 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:44:30 crc kubenswrapper[4965]: E0219 09:44:30.197715 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:44:30 crc kubenswrapper[4965]: I0219 09:44:30.197414 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:44:30 crc kubenswrapper[4965]: E0219 09:44:30.198659 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:44:30 crc kubenswrapper[4965]: E0219 09:44:30.306134 4965 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 09:44:30 crc kubenswrapper[4965]: I0219 09:44:30.357719 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:44:31 crc kubenswrapper[4965]: I0219 09:44:31.197552 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:44:31 crc kubenswrapper[4965]: E0219 09:44:31.197814 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:44:32 crc kubenswrapper[4965]: I0219 09:44:32.197265 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:44:32 crc kubenswrapper[4965]: I0219 09:44:32.197265 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:44:32 crc kubenswrapper[4965]: I0219 09:44:32.197441 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:44:32 crc kubenswrapper[4965]: E0219 09:44:32.197552 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:44:32 crc kubenswrapper[4965]: E0219 09:44:32.197655 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:44:32 crc kubenswrapper[4965]: E0219 09:44:32.197770 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:44:33 crc kubenswrapper[4965]: I0219 09:44:33.196970 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:44:33 crc kubenswrapper[4965]: E0219 09:44:33.197140 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:44:34 crc kubenswrapper[4965]: I0219 09:44:34.197451 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:44:34 crc kubenswrapper[4965]: I0219 09:44:34.197504 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:44:34 crc kubenswrapper[4965]: I0219 09:44:34.197547 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:44:34 crc kubenswrapper[4965]: E0219 09:44:34.197661 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:44:34 crc kubenswrapper[4965]: E0219 09:44:34.197799 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:44:34 crc kubenswrapper[4965]: E0219 09:44:34.197909 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:44:35 crc kubenswrapper[4965]: I0219 09:44:35.197447 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:44:35 crc kubenswrapper[4965]: E0219 09:44:35.199023 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:44:35 crc kubenswrapper[4965]: E0219 09:44:35.307476 4965 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 09:44:36 crc kubenswrapper[4965]: I0219 09:44:36.197447 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:44:36 crc kubenswrapper[4965]: I0219 09:44:36.197564 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:44:36 crc kubenswrapper[4965]: I0219 09:44:36.197455 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:44:36 crc kubenswrapper[4965]: E0219 09:44:36.197644 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:44:36 crc kubenswrapper[4965]: E0219 09:44:36.197734 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:44:36 crc kubenswrapper[4965]: E0219 09:44:36.197826 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:44:37 crc kubenswrapper[4965]: I0219 09:44:37.197788 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:44:37 crc kubenswrapper[4965]: E0219 09:44:37.198091 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:44:38 crc kubenswrapper[4965]: I0219 09:44:38.197641 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:44:38 crc kubenswrapper[4965]: E0219 09:44:38.197830 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:44:38 crc kubenswrapper[4965]: I0219 09:44:38.197642 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:44:38 crc kubenswrapper[4965]: I0219 09:44:38.197672 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:44:38 crc kubenswrapper[4965]: I0219 09:44:38.198158 4965 scope.go:117] "RemoveContainer" containerID="54890991cfb2ac3b404ed7c4c815f5c02e5a23fed0a82dcbc8b0071ae6bda90b" Feb 19 09:44:38 crc kubenswrapper[4965]: E0219 09:44:38.198359 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:44:38 crc kubenswrapper[4965]: E0219 09:44:38.198570 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:44:38 crc kubenswrapper[4965]: I0219 09:44:38.947275 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nsjqz_5e0b10c6-02b7-49d0-9a76-e89ebbb00528/kube-multus/1.log" Feb 19 09:44:38 crc kubenswrapper[4965]: I0219 09:44:38.947343 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nsjqz" event={"ID":"5e0b10c6-02b7-49d0-9a76-e89ebbb00528","Type":"ContainerStarted","Data":"5ce78b16779886d7dcc4f414531a624941d19304ad86ccb93cd0f009d3274b40"} Feb 19 09:44:39 crc kubenswrapper[4965]: I0219 09:44:39.196869 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:44:39 crc kubenswrapper[4965]: E0219 09:44:39.197030 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:44:40 crc kubenswrapper[4965]: I0219 09:44:40.196990 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:44:40 crc kubenswrapper[4965]: I0219 09:44:40.197073 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:44:40 crc kubenswrapper[4965]: I0219 09:44:40.197136 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:44:40 crc kubenswrapper[4965]: E0219 09:44:40.197181 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:44:40 crc kubenswrapper[4965]: E0219 09:44:40.197424 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:44:40 crc kubenswrapper[4965]: E0219 09:44:40.197532 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:44:40 crc kubenswrapper[4965]: E0219 09:44:40.309216 4965 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 09:44:41 crc kubenswrapper[4965]: I0219 09:44:41.197256 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:44:41 crc kubenswrapper[4965]: E0219 09:44:41.197527 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:44:42 crc kubenswrapper[4965]: I0219 09:44:42.197336 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:44:42 crc kubenswrapper[4965]: I0219 09:44:42.197336 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:44:42 crc kubenswrapper[4965]: E0219 09:44:42.198111 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:44:42 crc kubenswrapper[4965]: E0219 09:44:42.198154 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:44:42 crc kubenswrapper[4965]: I0219 09:44:42.197444 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:44:42 crc kubenswrapper[4965]: E0219 09:44:42.198301 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:44:43 crc kubenswrapper[4965]: I0219 09:44:43.197270 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:44:43 crc kubenswrapper[4965]: E0219 09:44:43.197579 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:44:44 crc kubenswrapper[4965]: I0219 09:44:44.197375 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:44:44 crc kubenswrapper[4965]: E0219 09:44:44.197535 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 09:44:44 crc kubenswrapper[4965]: I0219 09:44:44.197399 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:44:44 crc kubenswrapper[4965]: I0219 09:44:44.197375 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:44:44 crc kubenswrapper[4965]: E0219 09:44:44.197619 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 09:44:44 crc kubenswrapper[4965]: E0219 09:44:44.198027 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 09:44:45 crc kubenswrapper[4965]: I0219 09:44:45.197984 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:44:45 crc kubenswrapper[4965]: E0219 09:44:45.199755 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lwjwk" podUID="1e1b431a-0390-4366-82d1-6cb782c7a9e8" Feb 19 09:44:46 crc kubenswrapper[4965]: I0219 09:44:46.197803 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:44:46 crc kubenswrapper[4965]: I0219 09:44:46.197907 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:44:46 crc kubenswrapper[4965]: I0219 09:44:46.197978 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:44:46 crc kubenswrapper[4965]: I0219 09:44:46.200512 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 09:44:46 crc kubenswrapper[4965]: I0219 09:44:46.200746 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 09:44:46 crc kubenswrapper[4965]: I0219 09:44:46.201450 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 09:44:46 crc kubenswrapper[4965]: I0219 09:44:46.201563 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 09:44:47 crc kubenswrapper[4965]: I0219 09:44:47.197622 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:44:47 crc kubenswrapper[4965]: I0219 09:44:47.202237 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 09:44:47 crc kubenswrapper[4965]: I0219 09:44:47.203298 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 09:44:54 crc kubenswrapper[4965]: I0219 09:44:54.072009 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:44:54 crc kubenswrapper[4965]: E0219 09:44:54.072295 4965 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:46:56.072261338 +0000 UTC m=+271.693582678 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:44:54 crc kubenswrapper[4965]: I0219 09:44:54.173568 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:44:54 crc kubenswrapper[4965]: I0219 09:44:54.173673 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:44:54 crc kubenswrapper[4965]: I0219 09:44:54.173726 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:44:54 crc kubenswrapper[4965]: I0219 09:44:54.173818 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:44:54 crc kubenswrapper[4965]: I0219 09:44:54.175090 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:44:54 crc kubenswrapper[4965]: I0219 09:44:54.182806 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:44:54 crc kubenswrapper[4965]: I0219 09:44:54.183512 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:44:54 crc kubenswrapper[4965]: I0219 09:44:54.185374 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:44:54 crc kubenswrapper[4965]: I0219 09:44:54.314525 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 09:44:54 crc kubenswrapper[4965]: I0219 09:44:54.323810 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:44:54 crc kubenswrapper[4965]: I0219 09:44:54.330431 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 09:44:54 crc kubenswrapper[4965]: W0219 09:44:54.654591 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-9d39aa5ffa7fa100aa4367712b185f4221b83ef20dd6314871f923530a326a3d WatchSource:0}: Error finding container 9d39aa5ffa7fa100aa4367712b185f4221b83ef20dd6314871f923530a326a3d: Status 404 returned error can't find the container with id 9d39aa5ffa7fa100aa4367712b185f4221b83ef20dd6314871f923530a326a3d Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.012829 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e3e6587e70160db5fa133f6985bccfecac96ec4c059dd61451e1e87273fa3839"} Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.012907 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9d39aa5ffa7fa100aa4367712b185f4221b83ef20dd6314871f923530a326a3d"} Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.015034 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"183be4c28c77ce5a6a2961b6b2f5cca2bfff40d3fdede20d590792480379f6ae"} Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.015108 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"17e1e2701f1282df3c1efd49a7f96dd6cdd5b896b1d34c06a4324895b64cca2b"} Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.015364 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.017128 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d366ade7402ff1655521bc82e715ea8dc6d01f273bd0c970a517c9a20655a7ba"} Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.017252 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9dd7953ffce38b4ae30b3bdd0eecbaf12dc99fca015525f2d9f012203a269208"} Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.386439 4965 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.443918 4965 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-apiserver/apiserver-76f77b778f-25n6z"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.444676 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hdbpp"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.444705 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.445887 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.447056 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-msn62"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.447367 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4chpc"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.447720 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4chpc" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.448096 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-msn62" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.449286 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.449667 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-66l86"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.450125 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66l86" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.450236 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.451896 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-88wlz"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.452239 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.461816 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.462164 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.462336 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.462389 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.462852 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.463081 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.464147 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n8ngq"] Feb 19 09:44:55 crc 
kubenswrapper[4965]: I0219 09:44:55.464987 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8ngq" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.465967 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.466724 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.470641 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.470834 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.470942 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.471055 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.473451 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-8b2cq"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.473541 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.474247 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-8b2cq" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.474691 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.475298 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.475550 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.476144 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.476235 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.476286 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.476345 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.476493 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.476515 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.476871 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 09:44:55 crc 
kubenswrapper[4965]: I0219 09:44:55.477994 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-b2d5q"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.484616 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-b2d5q" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.484664 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.485240 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.487922 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.488056 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.488593 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.489530 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.489539 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.489879 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.490171 4965 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.492525 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.495964 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.496017 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.496050 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.496187 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.496328 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.496337 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.495992 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.506248 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.506401 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 
09:44:55.506499 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.506580 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.506679 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.506807 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.506897 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.506972 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.507050 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.507217 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.507447 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.509208 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.509405 4965 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication-operator/authentication-operator-69f744f599-pbfgx"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.509979 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pbfgx" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.510141 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.510274 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.510663 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.510927 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-audit-dir\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: \"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.510968 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2aa84740-2db0-45f4-a3e0-2b78422a51e3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-msn62\" (UID: \"2aa84740-2db0-45f4-a3e0-2b78422a51e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-msn62" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.510991 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/e91579c1-18ab-46f4-877b-98962c16c1d6-auth-proxy-config\") pod \"machine-approver-56656f9798-66l86\" (UID: \"e91579c1-18ab-46f4-877b-98962c16c1d6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66l86" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511018 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e27db4f2-9044-44cd-838c-ee58b322f026-trusted-ca\") pod \"console-operator-58897d9998-b2d5q\" (UID: \"e27db4f2-9044-44cd-838c-ee58b322f026\") " pod="openshift-console-operator/console-operator-58897d9998-b2d5q" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511035 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c19a4260-b2ba-478a-8fa6-e2045fe1b4ee-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n8ngq\" (UID: \"c19a4260-b2ba-478a-8fa6-e2045fe1b4ee\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8ngq" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511059 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e91579c1-18ab-46f4-877b-98962c16c1d6-config\") pod \"machine-approver-56656f9798-66l86\" (UID: \"e91579c1-18ab-46f4-877b-98962c16c1d6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66l86" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511077 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29k72\" (UniqueName: \"kubernetes.io/projected/e91579c1-18ab-46f4-877b-98962c16c1d6-kube-api-access-29k72\") pod \"machine-approver-56656f9798-66l86\" (UID: \"e91579c1-18ab-46f4-877b-98962c16c1d6\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66l86" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511094 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ccda909-983a-43f6-9e98-c46683e6f63f-serving-cert\") pod \"route-controller-manager-6576b87f9c-rttcv\" (UID: \"6ccda909-983a-43f6-9e98-c46683e6f63f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511116 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511138 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2aae678-17fc-4272-be7e-839946082d8b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hdbpp\" (UID: \"f2aae678-17fc-4272-be7e-839946082d8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511165 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511186 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/715c2ecd-ac0a-4758-9ded-2ce22952b44f-serving-cert\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511220 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511239 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2aae678-17fc-4272-be7e-839946082d8b-config\") pod \"controller-manager-879f6c89f-hdbpp\" (UID: \"f2aae678-17fc-4272-be7e-839946082d8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511259 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511275 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-template-error\") pod 
\"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511290 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aa84740-2db0-45f4-a3e0-2b78422a51e3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-msn62\" (UID: \"2aa84740-2db0-45f4-a3e0-2b78422a51e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-msn62" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511311 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4stw\" (UniqueName: \"kubernetes.io/projected/715c2ecd-ac0a-4758-9ded-2ce22952b44f-kube-api-access-p4stw\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511334 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/715c2ecd-ac0a-4758-9ded-2ce22952b44f-image-import-ca\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511354 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmqnm\" (UniqueName: \"kubernetes.io/projected/e255fdb7-438f-413c-baf2-52e93f1eb0a3-kube-api-access-bmqnm\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511372 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6kvb\" (UniqueName: \"kubernetes.io/projected/2aa84740-2db0-45f4-a3e0-2b78422a51e3-kube-api-access-b6kvb\") pod \"openshift-apiserver-operator-796bbdcf4f-msn62\" (UID: \"2aa84740-2db0-45f4-a3e0-2b78422a51e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-msn62" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511390 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8dbq\" (UniqueName: \"kubernetes.io/projected/efb57d4d-b3d4-42fa-a27b-299bdf135836-kube-api-access-s8dbq\") pod \"downloads-7954f5f757-8b2cq\" (UID: \"efb57d4d-b3d4-42fa-a27b-299bdf135836\") " pod="openshift-console/downloads-7954f5f757-8b2cq" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511407 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: \"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511424 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511440 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ccda909-983a-43f6-9e98-c46683e6f63f-client-ca\") pod 
\"route-controller-manager-6576b87f9c-rttcv\" (UID: \"6ccda909-983a-43f6-9e98-c46683e6f63f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511459 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/715c2ecd-ac0a-4758-9ded-2ce22952b44f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511472 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e27db4f2-9044-44cd-838c-ee58b322f026-serving-cert\") pod \"console-operator-58897d9998-b2d5q\" (UID: \"e27db4f2-9044-44cd-838c-ee58b322f026\") " pod="openshift-console-operator/console-operator-58897d9998-b2d5q" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511489 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/715c2ecd-ac0a-4758-9ded-2ce22952b44f-audit\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511510 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27db4f2-9044-44cd-838c-ee58b322f026-config\") pod \"console-operator-58897d9998-b2d5q\" (UID: \"e27db4f2-9044-44cd-838c-ee58b322f026\") " pod="openshift-console-operator/console-operator-58897d9998-b2d5q" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511525 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2236bd7c-4f4c-48a1-82cf-0b406ff1934f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4chpc\" (UID: \"2236bd7c-4f4c-48a1-82cf-0b406ff1934f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4chpc" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511543 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx9bh\" (UniqueName: \"kubernetes.io/projected/e27db4f2-9044-44cd-838c-ee58b322f026-kube-api-access-bx9bh\") pod \"console-operator-58897d9998-b2d5q\" (UID: \"e27db4f2-9044-44cd-838c-ee58b322f026\") " pod="openshift-console-operator/console-operator-58897d9998-b2d5q" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511560 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511576 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2mx7\" (UniqueName: \"kubernetes.io/projected/c19a4260-b2ba-478a-8fa6-e2045fe1b4ee-kube-api-access-k2mx7\") pod \"openshift-config-operator-7777fb866f-n8ngq\" (UID: \"c19a4260-b2ba-478a-8fa6-e2045fe1b4ee\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8ngq" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511595 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e91579c1-18ab-46f4-877b-98962c16c1d6-machine-approver-tls\") pod 
\"machine-approver-56656f9798-66l86\" (UID: \"e91579c1-18ab-46f4-877b-98962c16c1d6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66l86" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511610 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/715c2ecd-ac0a-4758-9ded-2ce22952b44f-node-pullsecrets\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511628 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c19a4260-b2ba-478a-8fa6-e2045fe1b4ee-serving-cert\") pod \"openshift-config-operator-7777fb866f-n8ngq\" (UID: \"c19a4260-b2ba-478a-8fa6-e2045fe1b4ee\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8ngq" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511647 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/715c2ecd-ac0a-4758-9ded-2ce22952b44f-config\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511669 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511685 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/715c2ecd-ac0a-4758-9ded-2ce22952b44f-audit-dir\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511702 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-encryption-config\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: \"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511722 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6wvc\" (UniqueName: \"kubernetes.io/projected/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-kube-api-access-w6wvc\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: \"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511746 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.511994 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.512058 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: \"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.512090 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-etcd-client\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: \"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.512135 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqv8d\" (UniqueName: \"kubernetes.io/projected/f2aae678-17fc-4272-be7e-839946082d8b-kube-api-access-dqv8d\") pod \"controller-manager-879f6c89f-hdbpp\" (UID: \"f2aae678-17fc-4272-be7e-839946082d8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.512154 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.512172 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: 
\"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.512382 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.512422 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-serving-cert\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: \"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.512451 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2aae678-17fc-4272-be7e-839946082d8b-serving-cert\") pod \"controller-manager-879f6c89f-hdbpp\" (UID: \"f2aae678-17fc-4272-be7e-839946082d8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.512485 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/715c2ecd-ac0a-4758-9ded-2ce22952b44f-encryption-config\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.512508 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-audit-policies\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: \"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm" Feb 19 09:44:55 crc 
kubenswrapper[4965]: I0219 09:44:55.512536 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxh57\" (UniqueName: \"kubernetes.io/projected/2236bd7c-4f4c-48a1-82cf-0b406ff1934f-kube-api-access-sxh57\") pod \"cluster-samples-operator-665b6dd947-4chpc\" (UID: \"2236bd7c-4f4c-48a1-82cf-0b406ff1934f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4chpc" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.512562 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/715c2ecd-ac0a-4758-9ded-2ce22952b44f-etcd-client\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.512586 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-audit-policies\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.512610 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2aae678-17fc-4272-be7e-839946082d8b-client-ca\") pod \"controller-manager-879f6c89f-hdbpp\" (UID: \"f2aae678-17fc-4272-be7e-839946082d8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.512642 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ccda909-983a-43f6-9e98-c46683e6f63f-config\") pod 
\"route-controller-manager-6576b87f9c-rttcv\" (UID: \"6ccda909-983a-43f6-9e98-c46683e6f63f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.512667 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/715c2ecd-ac0a-4758-9ded-2ce22952b44f-etcd-serving-ca\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.512694 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e255fdb7-438f-413c-baf2-52e93f1eb0a3-audit-dir\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.512728 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pflm7\" (UniqueName: \"kubernetes.io/projected/6ccda909-983a-43f6-9e98-c46683e6f63f-kube-api-access-pflm7\") pod \"route-controller-manager-6576b87f9c-rttcv\" (UID: \"6ccda909-983a-43f6-9e98-c46683e6f63f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.512478 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.512873 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.515933 4965 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.516022 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.516086 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.516492 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.516644 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.516808 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.516966 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.517517 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.522047 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vkxtv"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.522917 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vkxtv" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.524923 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.529346 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.533530 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.533993 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.534091 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.534135 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.535222 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-hgzq5"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.535755 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-hgzq5" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.537036 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.538141 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.538583 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.538719 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.539304 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.539509 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.542573 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.543232 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.543476 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.546515 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.546834 4965 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6f8hc"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.547396 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8dwq"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.547770 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8dwq" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.555515 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.560320 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6f8hc" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.561384 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.561788 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.562004 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.562258 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.562408 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.564260 4965 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.579008 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.610271 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.615600 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.615726 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.619883 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdd5s"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.620453 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfkkc"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.620794 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfkkc" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.621489 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.621600 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdd5s"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.622962 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.623107 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.623662 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.623914 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13fcdc33-7dcb-4d34-86ca-bd40d679560e-config\") pod \"authentication-operator-69f744f599-pbfgx\" (UID: \"13fcdc33-7dcb-4d34-86ca-bd40d679560e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbfgx"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.623969 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13fcdc33-7dcb-4d34-86ca-bd40d679560e-service-ca-bundle\") pod \"authentication-operator-69f744f599-pbfgx\" (UID: \"13fcdc33-7dcb-4d34-86ca-bd40d679560e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbfgx"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624012 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/715c2ecd-ac0a-4758-9ded-2ce22952b44f-config\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624040 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-encryption-config\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: \"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624067 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6wvc\" (UniqueName: \"kubernetes.io/projected/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-kube-api-access-w6wvc\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: \"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624094 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624116 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/715c2ecd-ac0a-4758-9ded-2ce22952b44f-audit-dir\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624143 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/16b70f8e-c2b6-4545-813e-23b82399a149-profile-collector-cert\") pod \"catalog-operator-68c6474976-6f8hc\" (UID: \"16b70f8e-c2b6-4545-813e-23b82399a149\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6f8hc"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624172 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624219 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pccx8\" (UniqueName: \"kubernetes.io/projected/16b70f8e-c2b6-4545-813e-23b82399a149-kube-api-access-pccx8\") pod \"catalog-operator-68c6474976-6f8hc\" (UID: \"16b70f8e-c2b6-4545-813e-23b82399a149\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6f8hc"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624245 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624281 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: \"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624283 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624309 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/91fd349f-c4be-4636-a5a9-76ed721d9afa-console-oauth-config\") pod \"console-f9d7485db-hgzq5\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " pod="openshift-console/console-f9d7485db-hgzq5"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624336 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-etcd-client\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: \"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624387 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqv8d\" (UniqueName: \"kubernetes.io/projected/f2aae678-17fc-4272-be7e-839946082d8b-kube-api-access-dqv8d\") pod \"controller-manager-879f6c89f-hdbpp\" (UID: \"f2aae678-17fc-4272-be7e-839946082d8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624419 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624445 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2aae678-17fc-4272-be7e-839946082d8b-serving-cert\") pod \"controller-manager-879f6c89f-hdbpp\" (UID: \"f2aae678-17fc-4272-be7e-839946082d8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624471 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/715c2ecd-ac0a-4758-9ded-2ce22952b44f-encryption-config\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624494 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-audit-policies\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: \"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624517 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-serving-cert\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: \"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624551 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxh57\" (UniqueName: \"kubernetes.io/projected/2236bd7c-4f4c-48a1-82cf-0b406ff1934f-kube-api-access-sxh57\") pod \"cluster-samples-operator-665b6dd947-4chpc\" (UID: \"2236bd7c-4f4c-48a1-82cf-0b406ff1934f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4chpc"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624575 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/715c2ecd-ac0a-4758-9ded-2ce22952b44f-etcd-client\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624601 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-audit-policies\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624629 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2aae678-17fc-4272-be7e-839946082d8b-client-ca\") pod \"controller-manager-879f6c89f-hdbpp\" (UID: \"f2aae678-17fc-4272-be7e-839946082d8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624656 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e255fdb7-438f-413c-baf2-52e93f1eb0a3-audit-dir\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624681 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ccda909-983a-43f6-9e98-c46683e6f63f-config\") pod \"route-controller-manager-6576b87f9c-rttcv\" (UID: \"6ccda909-983a-43f6-9e98-c46683e6f63f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624710 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/715c2ecd-ac0a-4758-9ded-2ce22952b44f-etcd-serving-ca\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624740 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pflm7\" (UniqueName: \"kubernetes.io/projected/6ccda909-983a-43f6-9e98-c46683e6f63f-kube-api-access-pflm7\") pod \"route-controller-manager-6576b87f9c-rttcv\" (UID: \"6ccda909-983a-43f6-9e98-c46683e6f63f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624767 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13fcdc33-7dcb-4d34-86ca-bd40d679560e-serving-cert\") pod \"authentication-operator-69f744f599-pbfgx\" (UID: \"13fcdc33-7dcb-4d34-86ca-bd40d679560e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbfgx"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624792 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2aa84740-2db0-45f4-a3e0-2b78422a51e3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-msn62\" (UID: \"2aa84740-2db0-45f4-a3e0-2b78422a51e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-msn62"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624814 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-audit-dir\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: \"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624838 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cba3a97-d8fe-4f88-9d3d-ed1a49713f1c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q8dwq\" (UID: \"6cba3a97-d8fe-4f88-9d3d-ed1a49713f1c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8dwq"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624863 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e91579c1-18ab-46f4-877b-98962c16c1d6-auth-proxy-config\") pod \"machine-approver-56656f9798-66l86\" (UID: \"e91579c1-18ab-46f4-877b-98962c16c1d6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66l86"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624886 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c19a4260-b2ba-478a-8fa6-e2045fe1b4ee-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n8ngq\" (UID: \"c19a4260-b2ba-478a-8fa6-e2045fe1b4ee\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8ngq"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624912 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9wg9\" (UniqueName: \"kubernetes.io/projected/13fcdc33-7dcb-4d34-86ca-bd40d679560e-kube-api-access-n9wg9\") pod \"authentication-operator-69f744f599-pbfgx\" (UID: \"13fcdc33-7dcb-4d34-86ca-bd40d679560e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbfgx"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624938 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e27db4f2-9044-44cd-838c-ee58b322f026-trusted-ca\") pod \"console-operator-58897d9998-b2d5q\" (UID: \"e27db4f2-9044-44cd-838c-ee58b322f026\") " pod="openshift-console-operator/console-operator-58897d9998-b2d5q"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624963 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ccda909-983a-43f6-9e98-c46683e6f63f-serving-cert\") pod \"route-controller-manager-6576b87f9c-rttcv\" (UID: \"6ccda909-983a-43f6-9e98-c46683e6f63f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.624987 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7b5f\" (UniqueName: \"kubernetes.io/projected/91fd349f-c4be-4636-a5a9-76ed721d9afa-kube-api-access-q7b5f\") pod \"console-f9d7485db-hgzq5\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " pod="openshift-console/console-f9d7485db-hgzq5"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625014 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625039 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e91579c1-18ab-46f4-877b-98962c16c1d6-config\") pod \"machine-approver-56656f9798-66l86\" (UID: \"e91579c1-18ab-46f4-877b-98962c16c1d6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66l86"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625062 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29k72\" (UniqueName: \"kubernetes.io/projected/e91579c1-18ab-46f4-877b-98962c16c1d6-kube-api-access-29k72\") pod \"machine-approver-56656f9798-66l86\" (UID: \"e91579c1-18ab-46f4-877b-98962c16c1d6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66l86"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625087 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2aae678-17fc-4272-be7e-839946082d8b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hdbpp\" (UID: \"f2aae678-17fc-4272-be7e-839946082d8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625138 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-console-config\") pod \"console-f9d7485db-hgzq5\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " pod="openshift-console/console-f9d7485db-hgzq5"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625171 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625217 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/715c2ecd-ac0a-4758-9ded-2ce22952b44f-serving-cert\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625242 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-trusted-ca-bundle\") pod \"console-f9d7485db-hgzq5\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " pod="openshift-console/console-f9d7485db-hgzq5"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625269 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625290 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625310 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2aae678-17fc-4272-be7e-839946082d8b-config\") pod \"controller-manager-879f6c89f-hdbpp\" (UID: \"f2aae678-17fc-4272-be7e-839946082d8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625329 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4stw\" (UniqueName: \"kubernetes.io/projected/715c2ecd-ac0a-4758-9ded-2ce22952b44f-kube-api-access-p4stw\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625351 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2nxt\" (UniqueName: \"kubernetes.io/projected/6cba3a97-d8fe-4f88-9d3d-ed1a49713f1c-kube-api-access-w2nxt\") pod \"openshift-controller-manager-operator-756b6f6bc6-q8dwq\" (UID: \"6cba3a97-d8fe-4f88-9d3d-ed1a49713f1c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8dwq"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625377 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625406 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aa84740-2db0-45f4-a3e0-2b78422a51e3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-msn62\" (UID: \"2aa84740-2db0-45f4-a3e0-2b78422a51e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-msn62"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625429 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/715c2ecd-ac0a-4758-9ded-2ce22952b44f-image-import-ca\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625456 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/91fd349f-c4be-4636-a5a9-76ed721d9afa-console-serving-cert\") pod \"console-f9d7485db-hgzq5\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " pod="openshift-console/console-f9d7485db-hgzq5"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625481 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cba3a97-d8fe-4f88-9d3d-ed1a49713f1c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q8dwq\" (UID: \"6cba3a97-d8fe-4f88-9d3d-ed1a49713f1c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8dwq"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625506 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmqnm\" (UniqueName: \"kubernetes.io/projected/e255fdb7-438f-413c-baf2-52e93f1eb0a3-kube-api-access-bmqnm\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625530 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6kvb\" (UniqueName: \"kubernetes.io/projected/2aa84740-2db0-45f4-a3e0-2b78422a51e3-kube-api-access-b6kvb\") pod \"openshift-apiserver-operator-796bbdcf4f-msn62\" (UID: \"2aa84740-2db0-45f4-a3e0-2b78422a51e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-msn62"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625553 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-oauth-serving-cert\") pod \"console-f9d7485db-hgzq5\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " pod="openshift-console/console-f9d7485db-hgzq5"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625578 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8dbq\" (UniqueName: \"kubernetes.io/projected/efb57d4d-b3d4-42fa-a27b-299bdf135836-kube-api-access-s8dbq\") pod \"downloads-7954f5f757-8b2cq\" (UID: \"efb57d4d-b3d4-42fa-a27b-299bdf135836\") " pod="openshift-console/downloads-7954f5f757-8b2cq"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625601 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: \"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625624 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625651 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ccda909-983a-43f6-9e98-c46683e6f63f-client-ca\") pod \"route-controller-manager-6576b87f9c-rttcv\" (UID: \"6ccda909-983a-43f6-9e98-c46683e6f63f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625674 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/715c2ecd-ac0a-4758-9ded-2ce22952b44f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625694 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/16b70f8e-c2b6-4545-813e-23b82399a149-srv-cert\") pod \"catalog-operator-68c6474976-6f8hc\" (UID: \"16b70f8e-c2b6-4545-813e-23b82399a149\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6f8hc"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625719 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2kc8\" (UniqueName: \"kubernetes.io/projected/d6fcc552-ae72-46a3-9525-cfb460da05e1-kube-api-access-t2kc8\") pod \"dns-operator-744455d44c-vkxtv\" (UID: \"d6fcc552-ae72-46a3-9525-cfb460da05e1\") " pod="openshift-dns-operator/dns-operator-744455d44c-vkxtv"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625743 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27db4f2-9044-44cd-838c-ee58b322f026-config\") pod \"console-operator-58897d9998-b2d5q\" (UID: \"e27db4f2-9044-44cd-838c-ee58b322f026\") " pod="openshift-console-operator/console-operator-58897d9998-b2d5q"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625771 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e27db4f2-9044-44cd-838c-ee58b322f026-serving-cert\") pod \"console-operator-58897d9998-b2d5q\" (UID: \"e27db4f2-9044-44cd-838c-ee58b322f026\") " pod="openshift-console-operator/console-operator-58897d9998-b2d5q"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625795 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/715c2ecd-ac0a-4758-9ded-2ce22952b44f-audit\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625819 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2236bd7c-4f4c-48a1-82cf-0b406ff1934f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4chpc\" (UID: \"2236bd7c-4f4c-48a1-82cf-0b406ff1934f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4chpc"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625847 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625871 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx9bh\" (UniqueName: \"kubernetes.io/projected/e27db4f2-9044-44cd-838c-ee58b322f026-kube-api-access-bx9bh\") pod \"console-operator-58897d9998-b2d5q\" (UID: \"e27db4f2-9044-44cd-838c-ee58b322f026\") " pod="openshift-console-operator/console-operator-58897d9998-b2d5q"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625895 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-service-ca\") pod \"console-f9d7485db-hgzq5\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " pod="openshift-console/console-f9d7485db-hgzq5"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625900 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bdvzx"]
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.626020 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.626430 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/715c2ecd-ac0a-4758-9ded-2ce22952b44f-config\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.626562 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5"]
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.626842 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7dd6s"]
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.627316 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7dd6s"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.625914 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13fcdc33-7dcb-4d34-86ca-bd40d679560e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pbfgx\" (UID: \"13fcdc33-7dcb-4d34-86ca-bd40d679560e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbfgx"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.627961 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e91579c1-18ab-46f4-877b-98962c16c1d6-machine-approver-tls\") pod \"machine-approver-56656f9798-66l86\" (UID: \"e91579c1-18ab-46f4-877b-98962c16c1d6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66l86"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.627984 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/715c2ecd-ac0a-4758-9ded-2ce22952b44f-node-pullsecrets\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.628010 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c19a4260-b2ba-478a-8fa6-e2045fe1b4ee-serving-cert\") pod \"openshift-config-operator-7777fb866f-n8ngq\" (UID: \"c19a4260-b2ba-478a-8fa6-e2045fe1b4ee\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8ngq"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.628032 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2mx7\" (UniqueName: \"kubernetes.io/projected/c19a4260-b2ba-478a-8fa6-e2045fe1b4ee-kube-api-access-k2mx7\") pod \"openshift-config-operator-7777fb866f-n8ngq\" (UID: \"c19a4260-b2ba-478a-8fa6-e2045fe1b4ee\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8ngq"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.628061 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d6fcc552-ae72-46a3-9525-cfb460da05e1-metrics-tls\") pod \"dns-operator-744455d44c-vkxtv\" (UID: \"d6fcc552-ae72-46a3-9525-cfb460da05e1\") " pod="openshift-dns-operator/dns-operator-744455d44c-vkxtv"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.628230 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bdvzx"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.628367 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.628780 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aa84740-2db0-45f4-a3e0-2b78422a51e3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-msn62\" (UID: \"2aa84740-2db0-45f4-a3e0-2b78422a51e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-msn62"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.629552 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/715c2ecd-ac0a-4758-9ded-2ce22952b44f-image-import-ca\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.630501 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.630593 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/715c2ecd-ac0a-4758-9ded-2ce22952b44f-audit-dir\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.631088 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.631584 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: \"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.631713 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.631938 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.632130 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.632430 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-audit-dir\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: \"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.633423 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e91579c1-18ab-46f4-877b-98962c16c1d6-auth-proxy-config\") pod \"machine-approver-56656f9798-66l86\" (UID: \"e91579c1-18ab-46f4-877b-98962c16c1d6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66l86"
Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.633808 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c19a4260-b2ba-478a-8fa6-e2045fe1b4ee-available-featuregates\") pod 
\"openshift-config-operator-7777fb866f-n8ngq\" (UID: \"c19a4260-b2ba-478a-8fa6-e2045fe1b4ee\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8ngq" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.637658 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e27db4f2-9044-44cd-838c-ee58b322f026-trusted-ca\") pod \"console-operator-58897d9998-b2d5q\" (UID: \"e27db4f2-9044-44cd-838c-ee58b322f026\") " pod="openshift-console-operator/console-operator-58897d9998-b2d5q" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.638413 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2aae678-17fc-4272-be7e-839946082d8b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hdbpp\" (UID: \"f2aae678-17fc-4272-be7e-839946082d8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.639308 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.639384 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.639448 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/715c2ecd-ac0a-4758-9ded-2ce22952b44f-node-pullsecrets\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " 
pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.639764 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: \"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.639911 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e255fdb7-438f-413c-baf2-52e93f1eb0a3-audit-dir\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.640360 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-audit-policies\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.640957 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2aae678-17fc-4272-be7e-839946082d8b-client-ca\") pod \"controller-manager-879f6c89f-hdbpp\" (UID: \"f2aae678-17fc-4272-be7e-839946082d8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.641758 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.642442 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.643146 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ccda909-983a-43f6-9e98-c46683e6f63f-config\") pod \"route-controller-manager-6576b87f9c-rttcv\" (UID: \"6ccda909-983a-43f6-9e98-c46683e6f63f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.644186 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ccda909-983a-43f6-9e98-c46683e6f63f-client-ca\") pod \"route-controller-manager-6576b87f9c-rttcv\" (UID: \"6ccda909-983a-43f6-9e98-c46683e6f63f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.644647 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.644888 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e91579c1-18ab-46f4-877b-98962c16c1d6-config\") pod \"machine-approver-56656f9798-66l86\" (UID: \"e91579c1-18ab-46f4-877b-98962c16c1d6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66l86" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.644929 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/715c2ecd-ac0a-4758-9ded-2ce22952b44f-trusted-ca-bundle\") pod 
\"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.645031 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.645175 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.645392 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vkrsj"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.645602 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27db4f2-9044-44cd-838c-ee58b322f026-config\") pod \"console-operator-58897d9998-b2d5q\" (UID: \"e27db4f2-9044-44cd-838c-ee58b322f026\") " pod="openshift-console-operator/console-operator-58897d9998-b2d5q" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.645714 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/715c2ecd-ac0a-4758-9ded-2ce22952b44f-etcd-serving-ca\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.645943 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hqt8l"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.646360 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zjvhq"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 
09:44:55.646723 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zjvhq" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.647013 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.647035 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-audit-policies\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: \"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.655806 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-serving-cert\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: \"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.656486 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2aae678-17fc-4272-be7e-839946082d8b-serving-cert\") pod \"controller-manager-879f6c89f-hdbpp\" (UID: \"f2aae678-17fc-4272-be7e-839946082d8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.658523 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hqt8l" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.661420 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2aae678-17fc-4272-be7e-839946082d8b-config\") pod \"controller-manager-879f6c89f-hdbpp\" (UID: \"f2aae678-17fc-4272-be7e-839946082d8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.661884 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.663135 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.663986 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/715c2ecd-ac0a-4758-9ded-2ce22952b44f-audit\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.666392 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kpdjc"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.666846 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qnhlh"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.667260 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qnhlh" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.667489 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kpdjc" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.670320 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.670323 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-encryption-config\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: \"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.670547 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c19a4260-b2ba-478a-8fa6-e2045fe1b4ee-serving-cert\") pod \"openshift-config-operator-7777fb866f-n8ngq\" (UID: \"c19a4260-b2ba-478a-8fa6-e2045fe1b4ee\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8ngq" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.670810 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2aa84740-2db0-45f4-a3e0-2b78422a51e3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-msn62\" (UID: \"2aa84740-2db0-45f4-a3e0-2b78422a51e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-msn62" Feb 
19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.670957 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.671000 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.671162 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.671182 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.671336 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ccda909-983a-43f6-9e98-c46683e6f63f-serving-cert\") pod \"route-controller-manager-6576b87f9c-rttcv\" (UID: \"6ccda909-983a-43f6-9e98-c46683e6f63f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv" Feb 19 09:44:55 crc kubenswrapper[4965]: 
I0219 09:44:55.672079 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.678463 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2236bd7c-4f4c-48a1-82cf-0b406ff1934f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4chpc\" (UID: \"2236bd7c-4f4c-48a1-82cf-0b406ff1934f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4chpc" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.678676 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.678725 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lz7fh"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.679372 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lz7fh" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.679728 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e91579c1-18ab-46f4-877b-98962c16c1d6-machine-approver-tls\") pod \"machine-approver-56656f9798-66l86\" (UID: \"e91579c1-18ab-46f4-877b-98962c16c1d6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66l86" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.679932 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e27db4f2-9044-44cd-838c-ee58b322f026-serving-cert\") pod \"console-operator-58897d9998-b2d5q\" (UID: \"e27db4f2-9044-44cd-838c-ee58b322f026\") " pod="openshift-console-operator/console-operator-58897d9998-b2d5q" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.680000 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/715c2ecd-ac0a-4758-9ded-2ce22952b44f-encryption-config\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.680033 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-56x8k"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.680435 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-56x8k" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.680901 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-h4p5q"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.682681 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-h4p5q" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.682963 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/715c2ecd-ac0a-4758-9ded-2ce22952b44f-etcd-client\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.683164 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.683735 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-etcd-client\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: \"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.686228 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gzld"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.688591 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gzld" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.690742 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bjhlw"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.691273 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bjhlw" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.695714 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2v6fb"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.695834 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/715c2ecd-ac0a-4758-9ded-2ce22952b44f-serving-cert\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.696046 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-47ksl"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.696339 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nq2jk"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.696359 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.696763 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nq2jk" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.696941 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2v6fb" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.697074 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-47ksl" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.698923 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-842k4"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.699390 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.699592 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6hln"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.701662 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kvwsm"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.701788 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6hln" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.702118 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kvwsm" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.720256 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-25n6z"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.722852 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-twxbq"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.725047 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-twxbq" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.728969 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-msn62"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.730921 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a347c331-9240-4c72-940b-60042b98960f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mdd5s\" (UID: \"a347c331-9240-4c72-940b-60042b98960f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdd5s" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.731031 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/91fd349f-c4be-4636-a5a9-76ed721d9afa-console-serving-cert\") pod \"console-f9d7485db-hgzq5\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " pod="openshift-console/console-f9d7485db-hgzq5" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.731087 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cba3a97-d8fe-4f88-9d3d-ed1a49713f1c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q8dwq\" (UID: \"6cba3a97-d8fe-4f88-9d3d-ed1a49713f1c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8dwq" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.731129 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-oauth-serving-cert\") pod \"console-f9d7485db-hgzq5\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " pod="openshift-console/console-f9d7485db-hgzq5" Feb 19 
09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.731164 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6cb9e72-09ea-41da-97ef-a5501b57a58b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6gzld\" (UID: \"c6cb9e72-09ea-41da-97ef-a5501b57a58b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gzld" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.731213 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/16b70f8e-c2b6-4545-813e-23b82399a149-srv-cert\") pod \"catalog-operator-68c6474976-6f8hc\" (UID: \"16b70f8e-c2b6-4545-813e-23b82399a149\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6f8hc" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.731243 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2kc8\" (UniqueName: \"kubernetes.io/projected/d6fcc552-ae72-46a3-9525-cfb460da05e1-kube-api-access-t2kc8\") pod \"dns-operator-744455d44c-vkxtv\" (UID: \"d6fcc552-ae72-46a3-9525-cfb460da05e1\") " pod="openshift-dns-operator/dns-operator-744455d44c-vkxtv" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.731283 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9dfc9a41-03c9-411f-82ad-e212654e4bc3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bdvzx\" (UID: \"9dfc9a41-03c9-411f-82ad-e212654e4bc3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bdvzx" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.731317 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/d6fcc552-ae72-46a3-9525-cfb460da05e1-metrics-tls\") pod \"dns-operator-744455d44c-vkxtv\" (UID: \"d6fcc552-ae72-46a3-9525-cfb460da05e1\") " pod="openshift-dns-operator/dns-operator-744455d44c-vkxtv" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.731340 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-service-ca\") pod \"console-f9d7485db-hgzq5\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " pod="openshift-console/console-f9d7485db-hgzq5" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.731361 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13fcdc33-7dcb-4d34-86ca-bd40d679560e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pbfgx\" (UID: \"13fcdc33-7dcb-4d34-86ca-bd40d679560e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbfgx" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.731684 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13fcdc33-7dcb-4d34-86ca-bd40d679560e-config\") pod \"authentication-operator-69f744f599-pbfgx\" (UID: \"13fcdc33-7dcb-4d34-86ca-bd40d679560e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbfgx" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.731703 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13fcdc33-7dcb-4d34-86ca-bd40d679560e-service-ca-bundle\") pod \"authentication-operator-69f744f599-pbfgx\" (UID: \"13fcdc33-7dcb-4d34-86ca-bd40d679560e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbfgx" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.731745 4965 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/16b70f8e-c2b6-4545-813e-23b82399a149-profile-collector-cert\") pod \"catalog-operator-68c6474976-6f8hc\" (UID: \"16b70f8e-c2b6-4545-813e-23b82399a149\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6f8hc" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.731765 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6cb9e72-09ea-41da-97ef-a5501b57a58b-config\") pod \"kube-apiserver-operator-766d6c64bb-6gzld\" (UID: \"c6cb9e72-09ea-41da-97ef-a5501b57a58b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gzld" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.731789 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pccx8\" (UniqueName: \"kubernetes.io/projected/16b70f8e-c2b6-4545-813e-23b82399a149-kube-api-access-pccx8\") pod \"catalog-operator-68c6474976-6f8hc\" (UID: \"16b70f8e-c2b6-4545-813e-23b82399a149\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6f8hc" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.731835 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/91fd349f-c4be-4636-a5a9-76ed721d9afa-console-oauth-config\") pod \"console-f9d7485db-hgzq5\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " pod="openshift-console/console-f9d7485db-hgzq5" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.731858 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe-config-volume\") pod \"collect-profiles-29524890-mgzh5\" (UID: 
\"7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.731883 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6cb9e72-09ea-41da-97ef-a5501b57a58b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6gzld\" (UID: \"c6cb9e72-09ea-41da-97ef-a5501b57a58b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gzld" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.731913 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe-secret-volume\") pod \"collect-profiles-29524890-mgzh5\" (UID: \"7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.731947 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbvbj\" (UniqueName: \"kubernetes.io/projected/7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe-kube-api-access-nbvbj\") pod \"collect-profiles-29524890-mgzh5\" (UID: \"7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.731971 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a347c331-9240-4c72-940b-60042b98960f-srv-cert\") pod \"olm-operator-6b444d44fb-mdd5s\" (UID: \"a347c331-9240-4c72-940b-60042b98960f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdd5s" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.732005 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cba3a97-d8fe-4f88-9d3d-ed1a49713f1c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q8dwq\" (UID: \"6cba3a97-d8fe-4f88-9d3d-ed1a49713f1c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8dwq" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.732024 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13fcdc33-7dcb-4d34-86ca-bd40d679560e-serving-cert\") pod \"authentication-operator-69f744f599-pbfgx\" (UID: \"13fcdc33-7dcb-4d34-86ca-bd40d679560e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbfgx" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.732048 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9wg9\" (UniqueName: \"kubernetes.io/projected/13fcdc33-7dcb-4d34-86ca-bd40d679560e-kube-api-access-n9wg9\") pod \"authentication-operator-69f744f599-pbfgx\" (UID: \"13fcdc33-7dcb-4d34-86ca-bd40d679560e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbfgx" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.732076 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7b5f\" (UniqueName: \"kubernetes.io/projected/91fd349f-c4be-4636-a5a9-76ed721d9afa-kube-api-access-q7b5f\") pod \"console-f9d7485db-hgzq5\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " pod="openshift-console/console-f9d7485db-hgzq5" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.732125 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvmwb\" (UniqueName: \"kubernetes.io/projected/9dfc9a41-03c9-411f-82ad-e212654e4bc3-kube-api-access-dvmwb\") pod \"package-server-manager-789f6589d5-bdvzx\" (UID: 
\"9dfc9a41-03c9-411f-82ad-e212654e4bc3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bdvzx" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.732150 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-console-config\") pod \"console-f9d7485db-hgzq5\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " pod="openshift-console/console-f9d7485db-hgzq5" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.732174 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt8ll\" (UniqueName: \"kubernetes.io/projected/a347c331-9240-4c72-940b-60042b98960f-kube-api-access-kt8ll\") pod \"olm-operator-6b444d44fb-mdd5s\" (UID: \"a347c331-9240-4c72-940b-60042b98960f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdd5s" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.732212 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-trusted-ca-bundle\") pod \"console-f9d7485db-hgzq5\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " pod="openshift-console/console-f9d7485db-hgzq5" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.732245 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2nxt\" (UniqueName: \"kubernetes.io/projected/6cba3a97-d8fe-4f88-9d3d-ed1a49713f1c-kube-api-access-w2nxt\") pod \"openshift-controller-manager-operator-756b6f6bc6-q8dwq\" (UID: \"6cba3a97-d8fe-4f88-9d3d-ed1a49713f1c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8dwq" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.733405 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/13fcdc33-7dcb-4d34-86ca-bd40d679560e-service-ca-bundle\") pod \"authentication-operator-69f744f599-pbfgx\" (UID: \"13fcdc33-7dcb-4d34-86ca-bd40d679560e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbfgx" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.733535 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13fcdc33-7dcb-4d34-86ca-bd40d679560e-config\") pod \"authentication-operator-69f744f599-pbfgx\" (UID: \"13fcdc33-7dcb-4d34-86ca-bd40d679560e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbfgx" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.734042 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cba3a97-d8fe-4f88-9d3d-ed1a49713f1c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q8dwq\" (UID: \"6cba3a97-d8fe-4f88-9d3d-ed1a49713f1c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8dwq" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.734528 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hdbpp"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.738636 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-oauth-serving-cert\") pod \"console-f9d7485db-hgzq5\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " pod="openshift-console/console-f9d7485db-hgzq5" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.739113 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-service-ca\") pod \"console-f9d7485db-hgzq5\" (UID: 
\"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " pod="openshift-console/console-f9d7485db-hgzq5" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.739679 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8b2cq"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.739827 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-trusted-ca-bundle\") pod \"console-f9d7485db-hgzq5\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " pod="openshift-console/console-f9d7485db-hgzq5" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.742672 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.745035 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-console-config\") pod \"console-f9d7485db-hgzq5\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " pod="openshift-console/console-f9d7485db-hgzq5" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.745146 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-88wlz"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.745226 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n8ngq"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.745632 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sbvpc"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.746557 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/16b70f8e-c2b6-4545-813e-23b82399a149-profile-collector-cert\") pod \"catalog-operator-68c6474976-6f8hc\" (UID: \"16b70f8e-c2b6-4545-813e-23b82399a149\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6f8hc" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.747183 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13fcdc33-7dcb-4d34-86ca-bd40d679560e-serving-cert\") pod \"authentication-operator-69f744f599-pbfgx\" (UID: \"13fcdc33-7dcb-4d34-86ca-bd40d679560e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbfgx" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.747337 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/91fd349f-c4be-4636-a5a9-76ed721d9afa-console-oauth-config\") pod \"console-f9d7485db-hgzq5\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " pod="openshift-console/console-f9d7485db-hgzq5" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.747523 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cba3a97-d8fe-4f88-9d3d-ed1a49713f1c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q8dwq\" (UID: \"6cba3a97-d8fe-4f88-9d3d-ed1a49713f1c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8dwq" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.748406 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13fcdc33-7dcb-4d34-86ca-bd40d679560e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pbfgx\" (UID: \"13fcdc33-7dcb-4d34-86ca-bd40d679560e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbfgx" Feb 19 09:44:55 crc kubenswrapper[4965]: 
I0219 09:44:55.749708 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.751924 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d6fcc552-ae72-46a3-9525-cfb460da05e1-metrics-tls\") pod \"dns-operator-744455d44c-vkxtv\" (UID: \"d6fcc552-ae72-46a3-9525-cfb460da05e1\") " pod="openshift-dns-operator/dns-operator-744455d44c-vkxtv" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.755545 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.755849 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-b2d5q"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.755975 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/16b70f8e-c2b6-4545-813e-23b82399a149-srv-cert\") pod \"catalog-operator-68c6474976-6f8hc\" (UID: \"16b70f8e-c2b6-4545-813e-23b82399a149\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6f8hc" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.756656 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/91fd349f-c4be-4636-a5a9-76ed721d9afa-console-serving-cert\") pod \"console-f9d7485db-hgzq5\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " pod="openshift-console/console-f9d7485db-hgzq5" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.762777 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.766024 4965 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7dd6s"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.773851 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.774536 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8dwq"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.775849 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-hgzq5"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.777024 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-th66b"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.778027 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6f8hc"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.778163 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-th66b" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.779227 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4chpc"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.780372 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.781466 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zjvhq"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.782757 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.783692 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2v6fb"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.787221 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.795105 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-h4p5q"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.796416 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfkkc"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.798992 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kpdjc"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.800408 4965 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bdvzx"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.801850 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lz7fh"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.803325 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qnhlh"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.805230 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pbfgx"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.806459 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hqt8l"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.807456 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.808152 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vkxtv"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.809266 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-gmzqb"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.810550 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-gmzqb" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.811000 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdd5s"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.812062 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-t4rcw"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.813308 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vkrsj"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.813400 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-t4rcw" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.814338 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sbvpc"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.815408 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6hln"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.816565 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-47ksl"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.817655 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gzld"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.818763 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-twxbq"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.820739 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kvwsm"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.822043 4965 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bjhlw"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.823222 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nq2jk"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.824994 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-t4rcw"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.826600 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-842k4"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.827733 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-gmzqb"] Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.833253 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbvbj\" (UniqueName: \"kubernetes.io/projected/7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe-kube-api-access-nbvbj\") pod \"collect-profiles-29524890-mgzh5\" (UID: \"7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.833312 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a347c331-9240-4c72-940b-60042b98960f-srv-cert\") pod \"olm-operator-6b444d44fb-mdd5s\" (UID: \"a347c331-9240-4c72-940b-60042b98960f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdd5s" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.833376 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvmwb\" (UniqueName: \"kubernetes.io/projected/9dfc9a41-03c9-411f-82ad-e212654e4bc3-kube-api-access-dvmwb\") pod \"package-server-manager-789f6589d5-bdvzx\" (UID: \"9dfc9a41-03c9-411f-82ad-e212654e4bc3\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bdvzx" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.833412 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt8ll\" (UniqueName: \"kubernetes.io/projected/a347c331-9240-4c72-940b-60042b98960f-kube-api-access-kt8ll\") pod \"olm-operator-6b444d44fb-mdd5s\" (UID: \"a347c331-9240-4c72-940b-60042b98960f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdd5s" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.833690 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a347c331-9240-4c72-940b-60042b98960f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mdd5s\" (UID: \"a347c331-9240-4c72-940b-60042b98960f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdd5s" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.834485 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6cb9e72-09ea-41da-97ef-a5501b57a58b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6gzld\" (UID: \"c6cb9e72-09ea-41da-97ef-a5501b57a58b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gzld" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.834533 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9dfc9a41-03c9-411f-82ad-e212654e4bc3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bdvzx\" (UID: \"9dfc9a41-03c9-411f-82ad-e212654e4bc3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bdvzx" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.834642 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6cb9e72-09ea-41da-97ef-a5501b57a58b-config\") pod \"kube-apiserver-operator-766d6c64bb-6gzld\" (UID: \"c6cb9e72-09ea-41da-97ef-a5501b57a58b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gzld" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.834689 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe-config-volume\") pod \"collect-profiles-29524890-mgzh5\" (UID: \"7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.834713 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6cb9e72-09ea-41da-97ef-a5501b57a58b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6gzld\" (UID: \"c6cb9e72-09ea-41da-97ef-a5501b57a58b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gzld" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.834790 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe-secret-volume\") pod \"collect-profiles-29524890-mgzh5\" (UID: \"7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.835998 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe-config-volume\") pod \"collect-profiles-29524890-mgzh5\" (UID: \"7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5" Feb 19 09:44:55 crc 
kubenswrapper[4965]: I0219 09:44:55.837644 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a347c331-9240-4c72-940b-60042b98960f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mdd5s\" (UID: \"a347c331-9240-4c72-940b-60042b98960f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdd5s" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.837806 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a347c331-9240-4c72-940b-60042b98960f-srv-cert\") pod \"olm-operator-6b444d44fb-mdd5s\" (UID: \"a347c331-9240-4c72-940b-60042b98960f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdd5s" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.838245 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9dfc9a41-03c9-411f-82ad-e212654e4bc3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bdvzx\" (UID: \"9dfc9a41-03c9-411f-82ad-e212654e4bc3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bdvzx" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.838647 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe-secret-volume\") pod \"collect-profiles-29524890-mgzh5\" (UID: \"7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.843762 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6wvc\" (UniqueName: \"kubernetes.io/projected/06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3-kube-api-access-w6wvc\") pod \"apiserver-7bbb656c7d-2v2tm\" (UID: 
\"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.864084 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmqnm\" (UniqueName: \"kubernetes.io/projected/e255fdb7-438f-413c-baf2-52e93f1eb0a3-kube-api-access-bmqnm\") pod \"oauth-openshift-558db77b4-88wlz\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.889771 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6kvb\" (UniqueName: \"kubernetes.io/projected/2aa84740-2db0-45f4-a3e0-2b78422a51e3-kube-api-access-b6kvb\") pod \"openshift-apiserver-operator-796bbdcf4f-msn62\" (UID: \"2aa84740-2db0-45f4-a3e0-2b78422a51e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-msn62" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.902266 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8dbq\" (UniqueName: \"kubernetes.io/projected/efb57d4d-b3d4-42fa-a27b-299bdf135836-kube-api-access-s8dbq\") pod \"downloads-7954f5f757-8b2cq\" (UID: \"efb57d4d-b3d4-42fa-a27b-299bdf135836\") " pod="openshift-console/downloads-7954f5f757-8b2cq" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.923539 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29k72\" (UniqueName: \"kubernetes.io/projected/e91579c1-18ab-46f4-877b-98962c16c1d6-kube-api-access-29k72\") pod \"machine-approver-56656f9798-66l86\" (UID: \"e91579c1-18ab-46f4-877b-98962c16c1d6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66l86" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.942330 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2mx7\" (UniqueName: 
\"kubernetes.io/projected/c19a4260-b2ba-478a-8fa6-e2045fe1b4ee-kube-api-access-k2mx7\") pod \"openshift-config-operator-7777fb866f-n8ngq\" (UID: \"c19a4260-b2ba-478a-8fa6-e2045fe1b4ee\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8ngq" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.962961 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqv8d\" (UniqueName: \"kubernetes.io/projected/f2aae678-17fc-4272-be7e-839946082d8b-kube-api-access-dqv8d\") pod \"controller-manager-879f6c89f-hdbpp\" (UID: \"f2aae678-17fc-4272-be7e-839946082d8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" Feb 19 09:44:55 crc kubenswrapper[4965]: I0219 09:44:55.967108 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.002384 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pflm7\" (UniqueName: \"kubernetes.io/projected/6ccda909-983a-43f6-9e98-c46683e6f63f-kube-api-access-pflm7\") pod \"route-controller-manager-6576b87f9c-rttcv\" (UID: \"6ccda909-983a-43f6-9e98-c46683e6f63f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.007248 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.023699 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66l86" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.028395 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 09:44:56 crc kubenswrapper[4965]: W0219 09:44:56.040861 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode91579c1_18ab_46f4_877b_98962c16c1d6.slice/crio-206407f1f0a01e7a056e64d399fb55b84b3e452f1bc57260c6ba33c8fef584b6 WatchSource:0}: Error finding container 206407f1f0a01e7a056e64d399fb55b84b3e452f1bc57260c6ba33c8fef584b6: Status 404 returned error can't find the container with id 206407f1f0a01e7a056e64d399fb55b84b3e452f1bc57260c6ba33c8fef584b6 Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.063750 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.068103 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.071284 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxh57\" (UniqueName: \"kubernetes.io/projected/2236bd7c-4f4c-48a1-82cf-0b406ff1934f-kube-api-access-sxh57\") pod \"cluster-samples-operator-665b6dd947-4chpc\" (UID: \"2236bd7c-4f4c-48a1-82cf-0b406ff1934f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4chpc" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.079125 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.089924 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-8b2cq" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.089960 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.109063 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.120213 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8ngq" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.127501 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.130541 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.146610 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4chpc" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.148757 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.177504 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.185602 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-msn62" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.189144 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.211132 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.227561 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.233518 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.254740 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.268340 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.327573 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.329818 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx9bh\" (UniqueName: \"kubernetes.io/projected/e27db4f2-9044-44cd-838c-ee58b322f026-kube-api-access-bx9bh\") pod \"console-operator-58897d9998-b2d5q\" (UID: \"e27db4f2-9044-44cd-838c-ee58b322f026\") " pod="openshift-console-operator/console-operator-58897d9998-b2d5q" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.350412 4965 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-p4stw\" (UniqueName: \"kubernetes.io/projected/715c2ecd-ac0a-4758-9ded-2ce22952b44f-kube-api-access-p4stw\") pod \"apiserver-76f77b778f-25n6z\" (UID: \"715c2ecd-ac0a-4758-9ded-2ce22952b44f\") " pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.353556 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.370174 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.392634 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm"] Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.396673 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-b2d5q" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.397451 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.399095 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.409936 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.415620 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-88wlz"] Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.421144 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8b2cq"] Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.428072 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 09:44:56 crc kubenswrapper[4965]: W0219 09:44:56.433263 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefb57d4d_b3d4_42fa_a27b_299bdf135836.slice/crio-8746e3563779f5f62413965ae1da68f8aaa5986852bc91b458a0982fb3485a35 WatchSource:0}: Error finding container 8746e3563779f5f62413965ae1da68f8aaa5986852bc91b458a0982fb3485a35: Status 404 returned error can't find the container with id 8746e3563779f5f62413965ae1da68f8aaa5986852bc91b458a0982fb3485a35 Feb 19 09:44:56 crc kubenswrapper[4965]: W0219 09:44:56.437864 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06d2ff1d_a88d_464e_9ab2_fe2bc7b67cf3.slice/crio-ff19fc8f9d652817f09b795fb3036cbe1a92078f0619e29d028c82eb6d783256 WatchSource:0}: Error finding container ff19fc8f9d652817f09b795fb3036cbe1a92078f0619e29d028c82eb6d783256: Status 404 returned 
error can't find the container with id ff19fc8f9d652817f09b795fb3036cbe1a92078f0619e29d028c82eb6d783256 Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.448786 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.475392 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.492372 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.508595 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.528858 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.552455 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.569670 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.587503 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.607392 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.617997 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4chpc"] Feb 
19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.627007 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.648511 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.664827 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv"] Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.668089 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.686097 4965 request.go:700] Waited for 1.002549761s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-serving-cert&limit=500&resourceVersion=0 Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.689368 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.717948 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.731331 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-25n6z"] Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.732870 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.744055 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hdbpp"] Feb 19 09:44:56 
crc kubenswrapper[4965]: I0219 09:44:56.748968 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 09:44:56 crc kubenswrapper[4965]: W0219 09:44:56.749270 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod715c2ecd_ac0a_4758_9ded_2ce22952b44f.slice/crio-3b59b09d944e637709b259d69ff3e050c5378d620bd73f4aa5632b1314d88bba WatchSource:0}: Error finding container 3b59b09d944e637709b259d69ff3e050c5378d620bd73f4aa5632b1314d88bba: Status 404 returned error can't find the container with id 3b59b09d944e637709b259d69ff3e050c5378d620bd73f4aa5632b1314d88bba Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.753989 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-msn62"] Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.763862 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n8ngq"] Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.768086 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-b2d5q"] Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.768503 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.789090 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.809162 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 19 09:44:56 crc kubenswrapper[4965]: W0219 09:44:56.809945 4965 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc19a4260_b2ba_478a_8fa6_e2045fe1b4ee.slice/crio-586ac71e7a051a17d0c73d954d855212ee3b05aaaa39010ca43155d12e210b7b WatchSource:0}: Error finding container 586ac71e7a051a17d0c73d954d855212ee3b05aaaa39010ca43155d12e210b7b: Status 404 returned error can't find the container with id 586ac71e7a051a17d0c73d954d855212ee3b05aaaa39010ca43155d12e210b7b Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.827889 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 09:44:56 crc kubenswrapper[4965]: E0219 09:44:56.835120 4965 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 19 09:44:56 crc kubenswrapper[4965]: E0219 09:44:56.835270 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c6cb9e72-09ea-41da-97ef-a5501b57a58b-config podName:c6cb9e72-09ea-41da-97ef-a5501b57a58b nodeName:}" failed. No retries permitted until 2026-02-19 09:44:57.335237538 +0000 UTC m=+152.956558848 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c6cb9e72-09ea-41da-97ef-a5501b57a58b-config") pod "kube-apiserver-operator-766d6c64bb-6gzld" (UID: "c6cb9e72-09ea-41da-97ef-a5501b57a58b") : failed to sync configmap cache: timed out waiting for the condition Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.843560 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6cb9e72-09ea-41da-97ef-a5501b57a58b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6gzld\" (UID: \"c6cb9e72-09ea-41da-97ef-a5501b57a58b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gzld" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.848068 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.868936 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.888780 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.916703 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.929248 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.956253 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.967635 4965 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 09:44:56 crc kubenswrapper[4965]: I0219 09:44:56.988485 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.008482 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.028112 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.037388 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv" event={"ID":"6ccda909-983a-43f6-9e98-c46683e6f63f","Type":"ContainerStarted","Data":"99414624e9db0435b58e12aabbcfccbd4e47ec3cdf6e81c63f11ad07c6cd2e0b"} Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.037446 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv" event={"ID":"6ccda909-983a-43f6-9e98-c46683e6f63f","Type":"ContainerStarted","Data":"290f3f233b34ff58939b593e5492dacd038f0ef5b6fe34e8e58b12f352af98cd"} Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.037631 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.039884 4965 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-rttcv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.039938 
4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv" podUID="6ccda909-983a-43f6-9e98-c46683e6f63f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.043581 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" event={"ID":"e255fdb7-438f-413c-baf2-52e93f1eb0a3","Type":"ContainerStarted","Data":"03f733d0ceee5e5ff28829ebae2bcf82cacad169ed9730a17481e7fe24f3cdaa"} Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.043647 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" event={"ID":"e255fdb7-438f-413c-baf2-52e93f1eb0a3","Type":"ContainerStarted","Data":"ba7fa50cfe4329cf5c0a1936e34e70471d831c12c0bf9ba143aef1632f445bd2"} Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.044050 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.048089 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-b2d5q" event={"ID":"e27db4f2-9044-44cd-838c-ee58b322f026","Type":"ContainerStarted","Data":"59fcacfe7d22282cdd7383f9b67a0256ba16e64dc43eac3339817563378bfc4f"} Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.048513 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.050338 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8ngq" 
event={"ID":"c19a4260-b2ba-478a-8fa6-e2045fe1b4ee","Type":"ContainerStarted","Data":"586ac71e7a051a17d0c73d954d855212ee3b05aaaa39010ca43155d12e210b7b"} Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.053533 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" event={"ID":"f2aae678-17fc-4272-be7e-839946082d8b","Type":"ContainerStarted","Data":"8ad95ff9a21fc85a50f044febc9884a5269525300cc69c0697206be2254733aa"} Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.053597 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" event={"ID":"f2aae678-17fc-4272-be7e-839946082d8b","Type":"ContainerStarted","Data":"123292214a789aff1bb137cb98d0d600c2bc9e7082288ec90dc9f88a8ac801c3"} Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.054024 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.057268 4965 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-hdbpp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.057335 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" podUID="f2aae678-17fc-4272-be7e-839946082d8b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.057285 4965 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-88wlz container/oauth-openshift namespace/openshift-authentication: Readiness 
probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.057412 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" podUID="e255fdb7-438f-413c-baf2-52e93f1eb0a3" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.062407 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66l86" event={"ID":"e91579c1-18ab-46f4-877b-98962c16c1d6","Type":"ContainerStarted","Data":"8f213cf3f611f691aa2077fb698bc9ef4ce013c5148f7d1caeea736306b9ece9"} Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.062450 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66l86" event={"ID":"e91579c1-18ab-46f4-877b-98962c16c1d6","Type":"ContainerStarted","Data":"c79c9d93f1a29dcbf45d67566caf3ccd6d479d6966ea3fd5315eb04ad86f4298"} Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.062467 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66l86" event={"ID":"e91579c1-18ab-46f4-877b-98962c16c1d6","Type":"ContainerStarted","Data":"206407f1f0a01e7a056e64d399fb55b84b3e452f1bc57260c6ba33c8fef584b6"} Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.065977 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-25n6z" event={"ID":"715c2ecd-ac0a-4758-9ded-2ce22952b44f","Type":"ContainerStarted","Data":"3b59b09d944e637709b259d69ff3e050c5378d620bd73f4aa5632b1314d88bba"} Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.067445 4965 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.070983 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-msn62" event={"ID":"2aa84740-2db0-45f4-a3e0-2b78422a51e3","Type":"ContainerStarted","Data":"44f2fa53d1184ad94ceafe4244b2538009784214fda9e4254af58c7cb79b1651"} Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.072110 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4chpc" event={"ID":"2236bd7c-4f4c-48a1-82cf-0b406ff1934f","Type":"ContainerStarted","Data":"46f79f0757a211b531b73de0ee3b8ae6ce5d4ba56790d31a637679f55e70e464"} Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.072143 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4chpc" event={"ID":"2236bd7c-4f4c-48a1-82cf-0b406ff1934f","Type":"ContainerStarted","Data":"af292856b65429c2a992128ffd31d7d274737ce73a014092f7817a5cf951b505"} Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.074097 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8b2cq" event={"ID":"efb57d4d-b3d4-42fa-a27b-299bdf135836","Type":"ContainerStarted","Data":"b9c5d55905826d36cdd6731c5feab063eb3effaae7df21a3c53d0203cb4ad89f"} Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.074133 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8b2cq" event={"ID":"efb57d4d-b3d4-42fa-a27b-299bdf135836","Type":"ContainerStarted","Data":"8746e3563779f5f62413965ae1da68f8aaa5986852bc91b458a0982fb3485a35"} Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.074448 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-8b2cq" Feb 19 09:44:57 crc 
kubenswrapper[4965]: I0219 09:44:57.075142 4965 generic.go:334] "Generic (PLEG): container finished" podID="06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3" containerID="02889015e2a57f9805de900c77171835dce2029270d8c8d7e9e499edc0a216db" exitCode=0 Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.075187 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm" event={"ID":"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3","Type":"ContainerDied","Data":"02889015e2a57f9805de900c77171835dce2029270d8c8d7e9e499edc0a216db"} Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.075220 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm" event={"ID":"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3","Type":"ContainerStarted","Data":"ff19fc8f9d652817f09b795fb3036cbe1a92078f0619e29d028c82eb6d783256"} Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.076752 4965 patch_prober.go:28] interesting pod/downloads-7954f5f757-8b2cq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.076817 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8b2cq" podUID="efb57d4d-b3d4-42fa-a27b-299bdf135836" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.088491 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.106879 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" 
Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.127502 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.148147 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.168098 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.194015 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.209351 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.228466 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.247752 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.267611 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.288605 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.308099 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.328458 4965 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca"/"signing-key" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.347460 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.366351 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6cb9e72-09ea-41da-97ef-a5501b57a58b-config\") pod \"kube-apiserver-operator-766d6c64bb-6gzld\" (UID: \"c6cb9e72-09ea-41da-97ef-a5501b57a58b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gzld" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.367083 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6cb9e72-09ea-41da-97ef-a5501b57a58b-config\") pod \"kube-apiserver-operator-766d6c64bb-6gzld\" (UID: \"c6cb9e72-09ea-41da-97ef-a5501b57a58b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gzld" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.367975 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.388210 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.465384 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2nxt\" (UniqueName: \"kubernetes.io/projected/6cba3a97-d8fe-4f88-9d3d-ed1a49713f1c-kube-api-access-w2nxt\") pod \"openshift-controller-manager-operator-756b6f6bc6-q8dwq\" (UID: \"6cba3a97-d8fe-4f88-9d3d-ed1a49713f1c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8dwq" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.493736 4965 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-t2kc8\" (UniqueName: \"kubernetes.io/projected/d6fcc552-ae72-46a3-9525-cfb460da05e1-kube-api-access-t2kc8\") pod \"dns-operator-744455d44c-vkxtv\" (UID: \"d6fcc552-ae72-46a3-9525-cfb460da05e1\") " pod="openshift-dns-operator/dns-operator-744455d44c-vkxtv" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.506021 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9wg9\" (UniqueName: \"kubernetes.io/projected/13fcdc33-7dcb-4d34-86ca-bd40d679560e-kube-api-access-n9wg9\") pod \"authentication-operator-69f744f599-pbfgx\" (UID: \"13fcdc33-7dcb-4d34-86ca-bd40d679560e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbfgx" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.531345 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.531854 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7b5f\" (UniqueName: \"kubernetes.io/projected/91fd349f-c4be-4636-a5a9-76ed721d9afa-kube-api-access-q7b5f\") pod \"console-f9d7485db-hgzq5\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " pod="openshift-console/console-f9d7485db-hgzq5" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.547925 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.584079 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pccx8\" (UniqueName: \"kubernetes.io/projected/16b70f8e-c2b6-4545-813e-23b82399a149-kube-api-access-pccx8\") pod \"catalog-operator-68c6474976-6f8hc\" (UID: \"16b70f8e-c2b6-4545-813e-23b82399a149\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6f8hc" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.587942 4965 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.605692 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pbfgx" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.607967 4965 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.614610 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vkxtv" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.621395 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-hgzq5" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.628074 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.629253 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8dwq" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.634482 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6f8hc" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.648370 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.667324 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.686512 4965 request.go:700] Waited for 1.908015756s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&limit=500&resourceVersion=0 Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.688547 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.707774 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.733738 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.753367 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.770428 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.792668 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 
09:44:57.818716 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.827667 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.881564 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbvbj\" (UniqueName: \"kubernetes.io/projected/7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe-kube-api-access-nbvbj\") pod \"collect-profiles-29524890-mgzh5\" (UID: \"7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.884050 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt8ll\" (UniqueName: \"kubernetes.io/projected/a347c331-9240-4c72-940b-60042b98960f-kube-api-access-kt8ll\") pod \"olm-operator-6b444d44fb-mdd5s\" (UID: \"a347c331-9240-4c72-940b-60042b98960f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdd5s" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.907133 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvmwb\" (UniqueName: \"kubernetes.io/projected/9dfc9a41-03c9-411f-82ad-e212654e4bc3-kube-api-access-dvmwb\") pod \"package-server-manager-789f6589d5-bdvzx\" (UID: \"9dfc9a41-03c9-411f-82ad-e212654e4bc3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bdvzx" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.929721 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6cb9e72-09ea-41da-97ef-a5501b57a58b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6gzld\" (UID: \"c6cb9e72-09ea-41da-97ef-a5501b57a58b\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gzld" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.952288 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdd5s" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.972432 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bdvzx" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.980563 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.985927 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltxjm\" (UniqueName: \"kubernetes.io/projected/22aed16a-0375-45f1-8762-8d5afddf848a-kube-api-access-ltxjm\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.986218 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/49cf856e-b37d-4ab6-9c6e-241cbc4be93e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hqt8l\" (UID: \"49cf856e-b37d-4ab6-9c6e-241cbc4be93e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqt8l" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.986250 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2nnb\" (UniqueName: \"kubernetes.io/projected/8c7730fc-ade8-4092-8923-54264965e892-kube-api-access-p2nnb\") pod \"migrator-59844c95c7-nq2jk\" (UID: \"8c7730fc-ade8-4092-8923-54264965e892\") 
" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nq2jk" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.986279 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49cf856e-b37d-4ab6-9c6e-241cbc4be93e-config\") pod \"machine-api-operator-5694c8668f-hqt8l\" (UID: \"49cf856e-b37d-4ab6-9c6e-241cbc4be93e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqt8l" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.986335 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/164f68fa-9132-47c9-9c23-bca749b3f4e8-trusted-ca\") pod \"ingress-operator-5b745b69d9-bjhlw\" (UID: \"164f68fa-9132-47c9-9c23-bca749b3f4e8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bjhlw" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.986358 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prq2s\" (UniqueName: \"kubernetes.io/projected/082ad3b5-c0c5-437c-8077-395c6ec09ec3-kube-api-access-prq2s\") pod \"cluster-image-registry-operator-dc59b4c8b-f6hln\" (UID: \"082ad3b5-c0c5-437c-8077-395c6ec09ec3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6hln" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.986404 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e0fcf66-e50c-4c4c-9370-08ed336d25d9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2v6fb\" (UID: \"5e0fcf66-e50c-4c4c-9370-08ed336d25d9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2v6fb" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.986457 4965 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22aed16a-0375-45f1-8762-8d5afddf848a-bound-sa-token\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.986483 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6bd24030-a535-4db1-b620-44d9c5c7a655-etcd-ca\") pod \"etcd-operator-b45778765-h4p5q\" (UID: \"6bd24030-a535-4db1-b620-44d9c5c7a655\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h4p5q" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.986610 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/082ad3b5-c0c5-437c-8077-395c6ec09ec3-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-f6hln\" (UID: \"082ad3b5-c0c5-437c-8077-395c6ec09ec3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6hln" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.986637 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/49cf856e-b37d-4ab6-9c6e-241cbc4be93e-images\") pod \"machine-api-operator-5694c8668f-hqt8l\" (UID: \"49cf856e-b37d-4ab6-9c6e-241cbc4be93e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqt8l" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.986675 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cd586f77-cb07-42a9-b20c-bf06ed856469-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qnhlh\" 
(UID: \"cd586f77-cb07-42a9-b20c-bf06ed856469\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qnhlh" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.986727 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f9b832c2-b2a0-4017-a323-c317ec4c1c1c-stats-auth\") pod \"router-default-5444994796-56x8k\" (UID: \"f9b832c2-b2a0-4017-a323-c317ec4c1c1c\") " pod="openshift-ingress/router-default-5444994796-56x8k" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.986750 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9ft7\" (UniqueName: \"kubernetes.io/projected/5e0fcf66-e50c-4c4c-9370-08ed336d25d9-kube-api-access-k9ft7\") pod \"control-plane-machine-set-operator-78cbb6b69f-2v6fb\" (UID: \"5e0fcf66-e50c-4c4c-9370-08ed336d25d9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2v6fb" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.986824 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z79fd\" (UniqueName: \"kubernetes.io/projected/f9b832c2-b2a0-4017-a323-c317ec4c1c1c-kube-api-access-z79fd\") pod \"router-default-5444994796-56x8k\" (UID: \"f9b832c2-b2a0-4017-a323-c317ec4c1c1c\") " pod="openshift-ingress/router-default-5444994796-56x8k" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.986874 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8772ecf1-3bce-4573-91de-daf37c1ef762-webhook-cert\") pod \"packageserver-d55dfcdfc-kpdjc\" (UID: \"8772ecf1-3bce-4573-91de-daf37c1ef762\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kpdjc" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.986898 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/52aa8dbd-b4ec-4579-b036-a1dcf35567a5-proxy-tls\") pod \"machine-config-operator-74547568cd-7dd6s\" (UID: \"52aa8dbd-b4ec-4579-b036-a1dcf35567a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7dd6s" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.986961 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/52aa8dbd-b4ec-4579-b036-a1dcf35567a5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7dd6s\" (UID: \"52aa8dbd-b4ec-4579-b036-a1dcf35567a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7dd6s" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.987000 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8772ecf1-3bce-4573-91de-daf37c1ef762-tmpfs\") pod \"packageserver-d55dfcdfc-kpdjc\" (UID: \"8772ecf1-3bce-4573-91de-daf37c1ef762\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kpdjc" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.987036 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bd24030-a535-4db1-b620-44d9c5c7a655-config\") pod \"etcd-operator-b45778765-h4p5q\" (UID: \"6bd24030-a535-4db1-b620-44d9c5c7a655\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h4p5q" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.987078 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bd24030-a535-4db1-b620-44d9c5c7a655-serving-cert\") pod \"etcd-operator-b45778765-h4p5q\" (UID: \"6bd24030-a535-4db1-b620-44d9c5c7a655\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-h4p5q" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.987103 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxfhn\" (UniqueName: \"kubernetes.io/projected/cd586f77-cb07-42a9-b20c-bf06ed856469-kube-api-access-fxfhn\") pod \"machine-config-controller-84d6567774-qnhlh\" (UID: \"cd586f77-cb07-42a9-b20c-bf06ed856469\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qnhlh" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.987153 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88b545dd-0faa-4093-8fd9-40b693b3ef87-config\") pod \"service-ca-operator-777779d784-47ksl\" (UID: \"88b545dd-0faa-4093-8fd9-40b693b3ef87\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-47ksl" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.987176 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/082ad3b5-c0c5-437c-8077-395c6ec09ec3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-f6hln\" (UID: \"082ad3b5-c0c5-437c-8077-395c6ec09ec3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6hln" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.987221 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd91bca5-eb6e-4fcf-b8c8-013e057a95d0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-zjvhq\" (UID: \"bd91bca5-eb6e-4fcf-b8c8-013e057a95d0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zjvhq" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.987254 4965 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.987280 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cd586f77-cb07-42a9-b20c-bf06ed856469-proxy-tls\") pod \"machine-config-controller-84d6567774-qnhlh\" (UID: \"cd586f77-cb07-42a9-b20c-bf06ed856469\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qnhlh" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.987304 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/164f68fa-9132-47c9-9c23-bca749b3f4e8-metrics-tls\") pod \"ingress-operator-5b745b69d9-bjhlw\" (UID: \"164f68fa-9132-47c9-9c23-bca749b3f4e8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bjhlw" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.987327 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/164f68fa-9132-47c9-9c23-bca749b3f4e8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bjhlw\" (UID: \"164f68fa-9132-47c9-9c23-bca749b3f4e8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bjhlw" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.987350 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8772ecf1-3bce-4573-91de-daf37c1ef762-apiservice-cert\") pod 
\"packageserver-d55dfcdfc-kpdjc\" (UID: \"8772ecf1-3bce-4573-91de-daf37c1ef762\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kpdjc" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.987389 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff842fd5-2dc6-4411-b644-6de86566ab22-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vfkkc\" (UID: \"ff842fd5-2dc6-4411-b644-6de86566ab22\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfkkc" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.989789 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdr4x\" (UniqueName: \"kubernetes.io/projected/ff842fd5-2dc6-4411-b644-6de86566ab22-kube-api-access-gdr4x\") pod \"kube-storage-version-migrator-operator-b67b599dd-vfkkc\" (UID: \"ff842fd5-2dc6-4411-b644-6de86566ab22\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfkkc" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.989830 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz8rh\" (UniqueName: \"kubernetes.io/projected/49cf856e-b37d-4ab6-9c6e-241cbc4be93e-kube-api-access-hz8rh\") pod \"machine-api-operator-5694c8668f-hqt8l\" (UID: \"49cf856e-b37d-4ab6-9c6e-241cbc4be93e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqt8l" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.989854 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/190603fe-6420-4d17-91f5-c37c9038002c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vkrsj\" (UID: \"190603fe-6420-4d17-91f5-c37c9038002c\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.990910 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd91bca5-eb6e-4fcf-b8c8-013e057a95d0-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-zjvhq\" (UID: \"bd91bca5-eb6e-4fcf-b8c8-013e057a95d0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zjvhq" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.990940 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/52aa8dbd-b4ec-4579-b036-a1dcf35567a5-images\") pod \"machine-config-operator-74547568cd-7dd6s\" (UID: \"52aa8dbd-b4ec-4579-b036-a1dcf35567a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7dd6s" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.990988 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4e92446-36cd-4840-97fc-d9f0d60e4e7d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lz7fh\" (UID: \"b4e92446-36cd-4840-97fc-d9f0d60e4e7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lz7fh" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.991014 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/082ad3b5-c0c5-437c-8077-395c6ec09ec3-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-f6hln\" (UID: \"082ad3b5-c0c5-437c-8077-395c6ec09ec3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6hln" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.991058 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/22aed16a-0375-45f1-8762-8d5afddf848a-registry-certificates\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.993500 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tbd4\" (UniqueName: \"kubernetes.io/projected/6c8a56a8-ed46-4e4a-9dbd-de3914ee3581-kube-api-access-7tbd4\") pod \"service-ca-9c57cc56f-kvwsm\" (UID: \"6c8a56a8-ed46-4e4a-9dbd-de3914ee3581\") " pod="openshift-service-ca/service-ca-9c57cc56f-kvwsm" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.993541 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cggwj\" (UniqueName: \"kubernetes.io/projected/164f68fa-9132-47c9-9c23-bca749b3f4e8-kube-api-access-cggwj\") pod \"ingress-operator-5b745b69d9-bjhlw\" (UID: \"164f68fa-9132-47c9-9c23-bca749b3f4e8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bjhlw" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.993567 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/22aed16a-0375-45f1-8762-8d5afddf848a-registry-tls\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.993622 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv2cm\" (UniqueName: \"kubernetes.io/projected/6bd24030-a535-4db1-b620-44d9c5c7a655-kube-api-access-qv2cm\") pod \"etcd-operator-b45778765-h4p5q\" (UID: 
\"6bd24030-a535-4db1-b620-44d9c5c7a655\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h4p5q" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.993648 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22aed16a-0375-45f1-8762-8d5afddf848a-trusted-ca\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.993674 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/22aed16a-0375-45f1-8762-8d5afddf848a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.993696 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6bd24030-a535-4db1-b620-44d9c5c7a655-etcd-client\") pod \"etcd-operator-b45778765-h4p5q\" (UID: \"6bd24030-a535-4db1-b620-44d9c5c7a655\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h4p5q" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.993733 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6c8a56a8-ed46-4e4a-9dbd-de3914ee3581-signing-key\") pod \"service-ca-9c57cc56f-kvwsm\" (UID: \"6c8a56a8-ed46-4e4a-9dbd-de3914ee3581\") " pod="openshift-service-ca/service-ca-9c57cc56f-kvwsm" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.993797 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/6c8a56a8-ed46-4e4a-9dbd-de3914ee3581-signing-cabundle\") pod \"service-ca-9c57cc56f-kvwsm\" (UID: \"6c8a56a8-ed46-4e4a-9dbd-de3914ee3581\") " pod="openshift-service-ca/service-ca-9c57cc56f-kvwsm" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.993860 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/190603fe-6420-4d17-91f5-c37c9038002c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vkrsj\" (UID: \"190603fe-6420-4d17-91f5-c37c9038002c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.993905 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b4md\" (UniqueName: \"kubernetes.io/projected/52aa8dbd-b4ec-4579-b036-a1dcf35567a5-kube-api-access-5b4md\") pod \"machine-config-operator-74547568cd-7dd6s\" (UID: \"52aa8dbd-b4ec-4579-b036-a1dcf35567a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7dd6s" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.993965 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6bd24030-a535-4db1-b620-44d9c5c7a655-etcd-service-ca\") pod \"etcd-operator-b45778765-h4p5q\" (UID: \"6bd24030-a535-4db1-b620-44d9c5c7a655\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h4p5q" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.994019 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff842fd5-2dc6-4411-b644-6de86566ab22-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vfkkc\" (UID: \"ff842fd5-2dc6-4411-b644-6de86566ab22\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfkkc" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.994043 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9b832c2-b2a0-4017-a323-c317ec4c1c1c-service-ca-bundle\") pod \"router-default-5444994796-56x8k\" (UID: \"f9b832c2-b2a0-4017-a323-c317ec4c1c1c\") " pod="openshift-ingress/router-default-5444994796-56x8k" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.994067 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9b832c2-b2a0-4017-a323-c317ec4c1c1c-metrics-certs\") pod \"router-default-5444994796-56x8k\" (UID: \"f9b832c2-b2a0-4017-a323-c317ec4c1c1c\") " pod="openshift-ingress/router-default-5444994796-56x8k" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.994095 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsd2b\" (UniqueName: \"kubernetes.io/projected/88b545dd-0faa-4093-8fd9-40b693b3ef87-kube-api-access-dsd2b\") pod \"service-ca-operator-777779d784-47ksl\" (UID: \"88b545dd-0faa-4093-8fd9-40b693b3ef87\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-47ksl" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.994117 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/22aed16a-0375-45f1-8762-8d5afddf848a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:57 crc kubenswrapper[4965]: E0219 09:44:57.995036 4965 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:44:58.495019301 +0000 UTC m=+154.116340671 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.995246 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd91bca5-eb6e-4fcf-b8c8-013e057a95d0-config\") pod \"kube-controller-manager-operator-78b949d7b-zjvhq\" (UID: \"bd91bca5-eb6e-4fcf-b8c8-013e057a95d0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zjvhq" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.995279 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4e92446-36cd-4840-97fc-d9f0d60e4e7d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lz7fh\" (UID: \"b4e92446-36cd-4840-97fc-d9f0d60e4e7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lz7fh" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.995316 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj27w\" (UniqueName: \"kubernetes.io/projected/190603fe-6420-4d17-91f5-c37c9038002c-kube-api-access-lj27w\") pod \"marketplace-operator-79b997595-vkrsj\" (UID: 
\"190603fe-6420-4d17-91f5-c37c9038002c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.995340 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88b545dd-0faa-4093-8fd9-40b693b3ef87-serving-cert\") pod \"service-ca-operator-777779d784-47ksl\" (UID: \"88b545dd-0faa-4093-8fd9-40b693b3ef87\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-47ksl" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.995391 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4e92446-36cd-4840-97fc-d9f0d60e4e7d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lz7fh\" (UID: \"b4e92446-36cd-4840-97fc-d9f0d60e4e7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lz7fh" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.995432 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzmmd\" (UniqueName: \"kubernetes.io/projected/8772ecf1-3bce-4573-91de-daf37c1ef762-kube-api-access-fzmmd\") pod \"packageserver-d55dfcdfc-kpdjc\" (UID: \"8772ecf1-3bce-4573-91de-daf37c1ef762\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kpdjc" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.995454 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f9b832c2-b2a0-4017-a323-c317ec4c1c1c-default-certificate\") pod \"router-default-5444994796-56x8k\" (UID: \"f9b832c2-b2a0-4017-a323-c317ec4c1c1c\") " pod="openshift-ingress/router-default-5444994796-56x8k" Feb 19 09:44:57 crc kubenswrapper[4965]: I0219 09:44:57.996824 4965 csr.go:261] certificate signing 
request csr-7fwtk is approved, waiting to be issued Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.023514 4965 csr.go:257] certificate signing request csr-7fwtk is issued Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.055541 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gzld" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.081689 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-b2d5q" event={"ID":"e27db4f2-9044-44cd-838c-ee58b322f026","Type":"ContainerStarted","Data":"362e934561e7a6bb0da09ce25bd51c9ec1a6f840698ef8e40b8028e93bca9b31"} Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.082429 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-b2d5q" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.091810 4965 generic.go:334] "Generic (PLEG): container finished" podID="715c2ecd-ac0a-4758-9ded-2ce22952b44f" containerID="07bdeec3c7ef2a0fcf537712b41ee8eda4c4621cad22caf5dce5138c243a7e8a" exitCode=0 Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.091930 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-25n6z" event={"ID":"715c2ecd-ac0a-4758-9ded-2ce22952b44f","Type":"ContainerDied","Data":"07bdeec3c7ef2a0fcf537712b41ee8eda4c4621cad22caf5dce5138c243a7e8a"} Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.095995 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:44:58 crc kubenswrapper[4965]: E0219 09:44:58.096214 4965 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:44:58.596162586 +0000 UTC m=+154.217483896 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.096629 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88b545dd-0faa-4093-8fd9-40b693b3ef87-config\") pod \"service-ca-operator-777779d784-47ksl\" (UID: \"88b545dd-0faa-4093-8fd9-40b693b3ef87\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-47ksl" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.096668 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/082ad3b5-c0c5-437c-8077-395c6ec09ec3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-f6hln\" (UID: \"082ad3b5-c0c5-437c-8077-395c6ec09ec3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6hln" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.096695 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd91bca5-eb6e-4fcf-b8c8-013e057a95d0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-zjvhq\" (UID: 
\"bd91bca5-eb6e-4fcf-b8c8-013e057a95d0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zjvhq" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.096724 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/23c8c1d2-4e7b-4cd4-99cf-92130064bbbf-mountpoint-dir\") pod \"csi-hostpathplugin-sbvpc\" (UID: \"23c8c1d2-4e7b-4cd4-99cf-92130064bbbf\") " pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.096746 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwwtg\" (UniqueName: \"kubernetes.io/projected/6fd7c237-27ab-45f9-a23b-d18372f1c28a-kube-api-access-gwwtg\") pod \"multus-admission-controller-857f4d67dd-twxbq\" (UID: \"6fd7c237-27ab-45f9-a23b-d18372f1c28a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-twxbq" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.096788 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.096819 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cd586f77-cb07-42a9-b20c-bf06ed856469-proxy-tls\") pod \"machine-config-controller-84d6567774-qnhlh\" (UID: \"cd586f77-cb07-42a9-b20c-bf06ed856469\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qnhlh" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.096841 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8772ecf1-3bce-4573-91de-daf37c1ef762-apiservice-cert\") pod \"packageserver-d55dfcdfc-kpdjc\" (UID: \"8772ecf1-3bce-4573-91de-daf37c1ef762\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kpdjc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.096863 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/164f68fa-9132-47c9-9c23-bca749b3f4e8-metrics-tls\") pod \"ingress-operator-5b745b69d9-bjhlw\" (UID: \"164f68fa-9132-47c9-9c23-bca749b3f4e8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bjhlw" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.096887 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/164f68fa-9132-47c9-9c23-bca749b3f4e8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bjhlw\" (UID: \"164f68fa-9132-47c9-9c23-bca749b3f4e8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bjhlw" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.096927 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cdd71340-9555-4441-b38c-89d5d4cc306b-metrics-tls\") pod \"dns-default-t4rcw\" (UID: \"cdd71340-9555-4441-b38c-89d5d4cc306b\") " pod="openshift-dns/dns-default-t4rcw" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.096952 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff842fd5-2dc6-4411-b644-6de86566ab22-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vfkkc\" (UID: \"ff842fd5-2dc6-4411-b644-6de86566ab22\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfkkc" Feb 19 
09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.096975 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdr4x\" (UniqueName: \"kubernetes.io/projected/ff842fd5-2dc6-4411-b644-6de86566ab22-kube-api-access-gdr4x\") pod \"kube-storage-version-migrator-operator-b67b599dd-vfkkc\" (UID: \"ff842fd5-2dc6-4411-b644-6de86566ab22\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfkkc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.097042 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz8rh\" (UniqueName: \"kubernetes.io/projected/49cf856e-b37d-4ab6-9c6e-241cbc4be93e-kube-api-access-hz8rh\") pod \"machine-api-operator-5694c8668f-hqt8l\" (UID: \"49cf856e-b37d-4ab6-9c6e-241cbc4be93e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqt8l" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.097068 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/190603fe-6420-4d17-91f5-c37c9038002c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vkrsj\" (UID: \"190603fe-6420-4d17-91f5-c37c9038002c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.097091 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd91bca5-eb6e-4fcf-b8c8-013e057a95d0-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-zjvhq\" (UID: \"bd91bca5-eb6e-4fcf-b8c8-013e057a95d0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zjvhq" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.097115 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/52aa8dbd-b4ec-4579-b036-a1dcf35567a5-images\") pod \"machine-config-operator-74547568cd-7dd6s\" (UID: \"52aa8dbd-b4ec-4579-b036-a1dcf35567a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7dd6s" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.097141 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4e92446-36cd-4840-97fc-d9f0d60e4e7d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lz7fh\" (UID: \"b4e92446-36cd-4840-97fc-d9f0d60e4e7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lz7fh" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.097165 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/082ad3b5-c0c5-437c-8077-395c6ec09ec3-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-f6hln\" (UID: \"082ad3b5-c0c5-437c-8077-395c6ec09ec3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6hln" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.097209 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/22aed16a-0375-45f1-8762-8d5afddf848a-registry-certificates\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.097240 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cggwj\" (UniqueName: \"kubernetes.io/projected/164f68fa-9132-47c9-9c23-bca749b3f4e8-kube-api-access-cggwj\") pod \"ingress-operator-5b745b69d9-bjhlw\" (UID: \"164f68fa-9132-47c9-9c23-bca749b3f4e8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bjhlw" Feb 19 09:44:58 
crc kubenswrapper[4965]: I0219 09:44:58.097264 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tbd4\" (UniqueName: \"kubernetes.io/projected/6c8a56a8-ed46-4e4a-9dbd-de3914ee3581-kube-api-access-7tbd4\") pod \"service-ca-9c57cc56f-kvwsm\" (UID: \"6c8a56a8-ed46-4e4a-9dbd-de3914ee3581\") " pod="openshift-service-ca/service-ca-9c57cc56f-kvwsm" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.097289 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/22aed16a-0375-45f1-8762-8d5afddf848a-registry-tls\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.097314 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv2cm\" (UniqueName: \"kubernetes.io/projected/6bd24030-a535-4db1-b620-44d9c5c7a655-kube-api-access-qv2cm\") pod \"etcd-operator-b45778765-h4p5q\" (UID: \"6bd24030-a535-4db1-b620-44d9c5c7a655\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h4p5q" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.097340 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6bd24030-a535-4db1-b620-44d9c5c7a655-etcd-client\") pod \"etcd-operator-b45778765-h4p5q\" (UID: \"6bd24030-a535-4db1-b620-44d9c5c7a655\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h4p5q" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.097378 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22aed16a-0375-45f1-8762-8d5afddf848a-trusted-ca\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.097399 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/22aed16a-0375-45f1-8762-8d5afddf848a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.097420 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6c8a56a8-ed46-4e4a-9dbd-de3914ee3581-signing-key\") pod \"service-ca-9c57cc56f-kvwsm\" (UID: \"6c8a56a8-ed46-4e4a-9dbd-de3914ee3581\") " pod="openshift-service-ca/service-ca-9c57cc56f-kvwsm" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.097471 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/32c44420-8d84-4c62-afc4-dad00a930b62-cert\") pod \"ingress-canary-gmzqb\" (UID: \"32c44420-8d84-4c62-afc4-dad00a930b62\") " pod="openshift-ingress-canary/ingress-canary-gmzqb" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.097492 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4vxm\" (UniqueName: \"kubernetes.io/projected/32c44420-8d84-4c62-afc4-dad00a930b62-kube-api-access-c4vxm\") pod \"ingress-canary-gmzqb\" (UID: \"32c44420-8d84-4c62-afc4-dad00a930b62\") " pod="openshift-ingress-canary/ingress-canary-gmzqb" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.097696 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88b545dd-0faa-4093-8fd9-40b693b3ef87-config\") pod \"service-ca-operator-777779d784-47ksl\" (UID: 
\"88b545dd-0faa-4093-8fd9-40b693b3ef87\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-47ksl" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.102095 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/082ad3b5-c0c5-437c-8077-395c6ec09ec3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-f6hln\" (UID: \"082ad3b5-c0c5-437c-8077-395c6ec09ec3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6hln" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.104479 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/52aa8dbd-b4ec-4579-b036-a1dcf35567a5-images\") pod \"machine-config-operator-74547568cd-7dd6s\" (UID: \"52aa8dbd-b4ec-4579-b036-a1dcf35567a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7dd6s" Feb 19 09:44:58 crc kubenswrapper[4965]: E0219 09:44:58.104975 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:44:58.60496138 +0000 UTC m=+154.226282690 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.105182 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff842fd5-2dc6-4411-b644-6de86566ab22-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vfkkc\" (UID: \"ff842fd5-2dc6-4411-b644-6de86566ab22\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfkkc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.107772 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/190603fe-6420-4d17-91f5-c37c9038002c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vkrsj\" (UID: \"190603fe-6420-4d17-91f5-c37c9038002c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.109746 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cd586f77-cb07-42a9-b20c-bf06ed856469-proxy-tls\") pod \"machine-config-controller-84d6567774-qnhlh\" (UID: \"cd586f77-cb07-42a9-b20c-bf06ed856469\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qnhlh" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.110283 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4e92446-36cd-4840-97fc-d9f0d60e4e7d-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-lz7fh\" (UID: \"b4e92446-36cd-4840-97fc-d9f0d60e4e7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lz7fh" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.110851 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6c8a56a8-ed46-4e4a-9dbd-de3914ee3581-signing-key\") pod \"service-ca-9c57cc56f-kvwsm\" (UID: \"6c8a56a8-ed46-4e4a-9dbd-de3914ee3581\") " pod="openshift-service-ca/service-ca-9c57cc56f-kvwsm" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.112252 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/082ad3b5-c0c5-437c-8077-395c6ec09ec3-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-f6hln\" (UID: \"082ad3b5-c0c5-437c-8077-395c6ec09ec3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6hln" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114000 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6bd24030-a535-4db1-b620-44d9c5c7a655-etcd-client\") pod \"etcd-operator-b45778765-h4p5q\" (UID: \"6bd24030-a535-4db1-b620-44d9c5c7a655\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h4p5q" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114073 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6c8a56a8-ed46-4e4a-9dbd-de3914ee3581-signing-cabundle\") pod \"service-ca-9c57cc56f-kvwsm\" (UID: \"6c8a56a8-ed46-4e4a-9dbd-de3914ee3581\") " pod="openshift-service-ca/service-ca-9c57cc56f-kvwsm" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114260 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/190603fe-6420-4d17-91f5-c37c9038002c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vkrsj\" (UID: \"190603fe-6420-4d17-91f5-c37c9038002c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114296 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph6d9\" (UniqueName: \"kubernetes.io/projected/cdd71340-9555-4441-b38c-89d5d4cc306b-kube-api-access-ph6d9\") pod \"dns-default-t4rcw\" (UID: \"cdd71340-9555-4441-b38c-89d5d4cc306b\") " pod="openshift-dns/dns-default-t4rcw" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114321 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b4md\" (UniqueName: \"kubernetes.io/projected/52aa8dbd-b4ec-4579-b036-a1dcf35567a5-kube-api-access-5b4md\") pod \"machine-config-operator-74547568cd-7dd6s\" (UID: \"52aa8dbd-b4ec-4579-b036-a1dcf35567a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7dd6s" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114414 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6bd24030-a535-4db1-b620-44d9c5c7a655-etcd-service-ca\") pod \"etcd-operator-b45778765-h4p5q\" (UID: \"6bd24030-a535-4db1-b620-44d9c5c7a655\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h4p5q" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114441 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff842fd5-2dc6-4411-b644-6de86566ab22-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vfkkc\" (UID: \"ff842fd5-2dc6-4411-b644-6de86566ab22\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfkkc" Feb 19 
09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114463 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9b832c2-b2a0-4017-a323-c317ec4c1c1c-service-ca-bundle\") pod \"router-default-5444994796-56x8k\" (UID: \"f9b832c2-b2a0-4017-a323-c317ec4c1c1c\") " pod="openshift-ingress/router-default-5444994796-56x8k" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114487 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9b832c2-b2a0-4017-a323-c317ec4c1c1c-metrics-certs\") pod \"router-default-5444994796-56x8k\" (UID: \"f9b832c2-b2a0-4017-a323-c317ec4c1c1c\") " pod="openshift-ingress/router-default-5444994796-56x8k" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114541 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsd2b\" (UniqueName: \"kubernetes.io/projected/88b545dd-0faa-4093-8fd9-40b693b3ef87-kube-api-access-dsd2b\") pod \"service-ca-operator-777779d784-47ksl\" (UID: \"88b545dd-0faa-4093-8fd9-40b693b3ef87\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-47ksl" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114560 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6fd7c237-27ab-45f9-a23b-d18372f1c28a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-twxbq\" (UID: \"6fd7c237-27ab-45f9-a23b-d18372f1c28a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-twxbq" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114594 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/22aed16a-0375-45f1-8762-8d5afddf848a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-842k4\" (UID: 
\"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114614 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd91bca5-eb6e-4fcf-b8c8-013e057a95d0-config\") pod \"kube-controller-manager-operator-78b949d7b-zjvhq\" (UID: \"bd91bca5-eb6e-4fcf-b8c8-013e057a95d0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zjvhq" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114635 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4e92446-36cd-4840-97fc-d9f0d60e4e7d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lz7fh\" (UID: \"b4e92446-36cd-4840-97fc-d9f0d60e4e7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lz7fh" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114671 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj27w\" (UniqueName: \"kubernetes.io/projected/190603fe-6420-4d17-91f5-c37c9038002c-kube-api-access-lj27w\") pod \"marketplace-operator-79b997595-vkrsj\" (UID: \"190603fe-6420-4d17-91f5-c37c9038002c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114688 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88b545dd-0faa-4093-8fd9-40b693b3ef87-serving-cert\") pod \"service-ca-operator-777779d784-47ksl\" (UID: \"88b545dd-0faa-4093-8fd9-40b693b3ef87\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-47ksl" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114707 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/23c8c1d2-4e7b-4cd4-99cf-92130064bbbf-plugins-dir\") pod \"csi-hostpathplugin-sbvpc\" (UID: \"23c8c1d2-4e7b-4cd4-99cf-92130064bbbf\") " pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114728 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4e92446-36cd-4840-97fc-d9f0d60e4e7d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lz7fh\" (UID: \"b4e92446-36cd-4840-97fc-d9f0d60e4e7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lz7fh" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114744 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f9b832c2-b2a0-4017-a323-c317ec4c1c1c-default-certificate\") pod \"router-default-5444994796-56x8k\" (UID: \"f9b832c2-b2a0-4017-a323-c317ec4c1c1c\") " pod="openshift-ingress/router-default-5444994796-56x8k" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114760 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/23c8c1d2-4e7b-4cd4-99cf-92130064bbbf-socket-dir\") pod \"csi-hostpathplugin-sbvpc\" (UID: \"23c8c1d2-4e7b-4cd4-99cf-92130064bbbf\") " pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114782 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzmmd\" (UniqueName: \"kubernetes.io/projected/8772ecf1-3bce-4573-91de-daf37c1ef762-kube-api-access-fzmmd\") pod \"packageserver-d55dfcdfc-kpdjc\" (UID: \"8772ecf1-3bce-4573-91de-daf37c1ef762\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kpdjc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114802 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/49cf856e-b37d-4ab6-9c6e-241cbc4be93e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hqt8l\" (UID: \"49cf856e-b37d-4ab6-9c6e-241cbc4be93e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqt8l" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114821 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2nnb\" (UniqueName: \"kubernetes.io/projected/8c7730fc-ade8-4092-8923-54264965e892-kube-api-access-p2nnb\") pod \"migrator-59844c95c7-nq2jk\" (UID: \"8c7730fc-ade8-4092-8923-54264965e892\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nq2jk" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114842 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltxjm\" (UniqueName: \"kubernetes.io/projected/22aed16a-0375-45f1-8762-8d5afddf848a-kube-api-access-ltxjm\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114860 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49cf856e-b37d-4ab6-9c6e-241cbc4be93e-config\") pod \"machine-api-operator-5694c8668f-hqt8l\" (UID: \"49cf856e-b37d-4ab6-9c6e-241cbc4be93e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqt8l" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114878 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/23c8c1d2-4e7b-4cd4-99cf-92130064bbbf-registration-dir\") pod \"csi-hostpathplugin-sbvpc\" (UID: \"23c8c1d2-4e7b-4cd4-99cf-92130064bbbf\") " pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114905 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b28sm\" (UniqueName: \"kubernetes.io/projected/23c8c1d2-4e7b-4cd4-99cf-92130064bbbf-kube-api-access-b28sm\") pod \"csi-hostpathplugin-sbvpc\" (UID: \"23c8c1d2-4e7b-4cd4-99cf-92130064bbbf\") " pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114924 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prq2s\" (UniqueName: \"kubernetes.io/projected/082ad3b5-c0c5-437c-8077-395c6ec09ec3-kube-api-access-prq2s\") pod \"cluster-image-registry-operator-dc59b4c8b-f6hln\" (UID: \"082ad3b5-c0c5-437c-8077-395c6ec09ec3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6hln" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114945 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/10c3f447-9428-4d95-98ce-69f6c5f8bf38-node-bootstrap-token\") pod \"machine-config-server-th66b\" (UID: \"10c3f447-9428-4d95-98ce-69f6c5f8bf38\") " pod="openshift-machine-config-operator/machine-config-server-th66b" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.114968 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/164f68fa-9132-47c9-9c23-bca749b3f4e8-trusted-ca\") pod \"ingress-operator-5b745b69d9-bjhlw\" (UID: \"164f68fa-9132-47c9-9c23-bca749b3f4e8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bjhlw" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 
09:44:58.115004 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e0fcf66-e50c-4c4c-9370-08ed336d25d9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2v6fb\" (UID: \"5e0fcf66-e50c-4c4c-9370-08ed336d25d9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2v6fb" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.115020 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6bd24030-a535-4db1-b620-44d9c5c7a655-etcd-ca\") pod \"etcd-operator-b45778765-h4p5q\" (UID: \"6bd24030-a535-4db1-b620-44d9c5c7a655\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h4p5q" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.115043 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22aed16a-0375-45f1-8762-8d5afddf848a-bound-sa-token\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.115075 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/23c8c1d2-4e7b-4cd4-99cf-92130064bbbf-csi-data-dir\") pod \"csi-hostpathplugin-sbvpc\" (UID: \"23c8c1d2-4e7b-4cd4-99cf-92130064bbbf\") " pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.116037 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6c8a56a8-ed46-4e4a-9dbd-de3914ee3581-signing-cabundle\") pod \"service-ca-9c57cc56f-kvwsm\" (UID: \"6c8a56a8-ed46-4e4a-9dbd-de3914ee3581\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-kvwsm" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.116761 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22aed16a-0375-45f1-8762-8d5afddf848a-trusted-ca\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.123311 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/22aed16a-0375-45f1-8762-8d5afddf848a-registry-certificates\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.124280 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/22aed16a-0375-45f1-8762-8d5afddf848a-registry-tls\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.124403 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/082ad3b5-c0c5-437c-8077-395c6ec09ec3-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-f6hln\" (UID: \"082ad3b5-c0c5-437c-8077-395c6ec09ec3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6hln" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.124449 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdd71340-9555-4441-b38c-89d5d4cc306b-config-volume\") pod 
\"dns-default-t4rcw\" (UID: \"cdd71340-9555-4441-b38c-89d5d4cc306b\") " pod="openshift-dns/dns-default-t4rcw" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.124569 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/49cf856e-b37d-4ab6-9c6e-241cbc4be93e-images\") pod \"machine-api-operator-5694c8668f-hqt8l\" (UID: \"49cf856e-b37d-4ab6-9c6e-241cbc4be93e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqt8l" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.124612 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cd586f77-cb07-42a9-b20c-bf06ed856469-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qnhlh\" (UID: \"cd586f77-cb07-42a9-b20c-bf06ed856469\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qnhlh" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.124642 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f9b832c2-b2a0-4017-a323-c317ec4c1c1c-stats-auth\") pod \"router-default-5444994796-56x8k\" (UID: \"f9b832c2-b2a0-4017-a323-c317ec4c1c1c\") " pod="openshift-ingress/router-default-5444994796-56x8k" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.124684 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9ft7\" (UniqueName: \"kubernetes.io/projected/5e0fcf66-e50c-4c4c-9370-08ed336d25d9-kube-api-access-k9ft7\") pod \"control-plane-machine-set-operator-78cbb6b69f-2v6fb\" (UID: \"5e0fcf66-e50c-4c4c-9370-08ed336d25d9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2v6fb" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.124702 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/8772ecf1-3bce-4573-91de-daf37c1ef762-apiservice-cert\") pod \"packageserver-d55dfcdfc-kpdjc\" (UID: \"8772ecf1-3bce-4573-91de-daf37c1ef762\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kpdjc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.124751 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-msn62" event={"ID":"2aa84740-2db0-45f4-a3e0-2b78422a51e3","Type":"ContainerStarted","Data":"74899f5a2a0eb575a5ea850e39c67dab7f3b0603db49aca99dfa1ef9879e15ef"} Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.124800 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z79fd\" (UniqueName: \"kubernetes.io/projected/f9b832c2-b2a0-4017-a323-c317ec4c1c1c-kube-api-access-z79fd\") pod \"router-default-5444994796-56x8k\" (UID: \"f9b832c2-b2a0-4017-a323-c317ec4c1c1c\") " pod="openshift-ingress/router-default-5444994796-56x8k" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.124864 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8772ecf1-3bce-4573-91de-daf37c1ef762-webhook-cert\") pod \"packageserver-d55dfcdfc-kpdjc\" (UID: \"8772ecf1-3bce-4573-91de-daf37c1ef762\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kpdjc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.124889 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/52aa8dbd-b4ec-4579-b036-a1dcf35567a5-proxy-tls\") pod \"machine-config-operator-74547568cd-7dd6s\" (UID: \"52aa8dbd-b4ec-4579-b036-a1dcf35567a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7dd6s" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.124928 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/52aa8dbd-b4ec-4579-b036-a1dcf35567a5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7dd6s\" (UID: \"52aa8dbd-b4ec-4579-b036-a1dcf35567a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7dd6s" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.124989 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8772ecf1-3bce-4573-91de-daf37c1ef762-tmpfs\") pod \"packageserver-d55dfcdfc-kpdjc\" (UID: \"8772ecf1-3bce-4573-91de-daf37c1ef762\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kpdjc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.125058 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bd24030-a535-4db1-b620-44d9c5c7a655-config\") pod \"etcd-operator-b45778765-h4p5q\" (UID: \"6bd24030-a535-4db1-b620-44d9c5c7a655\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h4p5q" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.125096 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/10c3f447-9428-4d95-98ce-69f6c5f8bf38-certs\") pod \"machine-config-server-th66b\" (UID: \"10c3f447-9428-4d95-98ce-69f6c5f8bf38\") " pod="openshift-machine-config-operator/machine-config-server-th66b" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.125136 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46bld\" (UniqueName: \"kubernetes.io/projected/10c3f447-9428-4d95-98ce-69f6c5f8bf38-kube-api-access-46bld\") pod \"machine-config-server-th66b\" (UID: \"10c3f447-9428-4d95-98ce-69f6c5f8bf38\") " pod="openshift-machine-config-operator/machine-config-server-th66b" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 
09:44:58.125161 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd91bca5-eb6e-4fcf-b8c8-013e057a95d0-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-zjvhq\" (UID: \"bd91bca5-eb6e-4fcf-b8c8-013e057a95d0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zjvhq" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.125185 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bd24030-a535-4db1-b620-44d9c5c7a655-serving-cert\") pod \"etcd-operator-b45778765-h4p5q\" (UID: \"6bd24030-a535-4db1-b620-44d9c5c7a655\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h4p5q" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.125237 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxfhn\" (UniqueName: \"kubernetes.io/projected/cd586f77-cb07-42a9-b20c-bf06ed856469-kube-api-access-fxfhn\") pod \"machine-config-controller-84d6567774-qnhlh\" (UID: \"cd586f77-cb07-42a9-b20c-bf06ed856469\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qnhlh" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.125582 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/22aed16a-0375-45f1-8762-8d5afddf848a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.126054 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/164f68fa-9132-47c9-9c23-bca749b3f4e8-metrics-tls\") pod \"ingress-operator-5b745b69d9-bjhlw\" (UID: 
\"164f68fa-9132-47c9-9c23-bca749b3f4e8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bjhlw" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.134434 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4e92446-36cd-4840-97fc-d9f0d60e4e7d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lz7fh\" (UID: \"b4e92446-36cd-4840-97fc-d9f0d60e4e7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lz7fh" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.134546 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/49cf856e-b37d-4ab6-9c6e-241cbc4be93e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hqt8l\" (UID: \"49cf856e-b37d-4ab6-9c6e-241cbc4be93e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqt8l" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.134936 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/190603fe-6420-4d17-91f5-c37c9038002c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vkrsj\" (UID: \"190603fe-6420-4d17-91f5-c37c9038002c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.135402 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49cf856e-b37d-4ab6-9c6e-241cbc4be93e-config\") pod \"machine-api-operator-5694c8668f-hqt8l\" (UID: \"49cf856e-b37d-4ab6-9c6e-241cbc4be93e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqt8l" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.135690 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/8772ecf1-3bce-4573-91de-daf37c1ef762-tmpfs\") pod \"packageserver-d55dfcdfc-kpdjc\" (UID: \"8772ecf1-3bce-4573-91de-daf37c1ef762\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kpdjc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.135958 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cd586f77-cb07-42a9-b20c-bf06ed856469-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qnhlh\" (UID: \"cd586f77-cb07-42a9-b20c-bf06ed856469\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qnhlh" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.139907 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/164f68fa-9132-47c9-9c23-bca749b3f4e8-trusted-ca\") pod \"ingress-operator-5b745b69d9-bjhlw\" (UID: \"164f68fa-9132-47c9-9c23-bca749b3f4e8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bjhlw" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.141301 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/22aed16a-0375-45f1-8762-8d5afddf848a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.141679 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6bd24030-a535-4db1-b620-44d9c5c7a655-etcd-ca\") pod \"etcd-operator-b45778765-h4p5q\" (UID: \"6bd24030-a535-4db1-b620-44d9c5c7a655\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h4p5q" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.142517 4965 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"images\" (UniqueName: \"kubernetes.io/configmap/49cf856e-b37d-4ab6-9c6e-241cbc4be93e-images\") pod \"machine-api-operator-5694c8668f-hqt8l\" (UID: \"49cf856e-b37d-4ab6-9c6e-241cbc4be93e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqt8l" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.145027 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6bd24030-a535-4db1-b620-44d9c5c7a655-etcd-service-ca\") pod \"etcd-operator-b45778765-h4p5q\" (UID: \"6bd24030-a535-4db1-b620-44d9c5c7a655\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h4p5q" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.150617 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff842fd5-2dc6-4411-b644-6de86566ab22-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vfkkc\" (UID: \"ff842fd5-2dc6-4411-b644-6de86566ab22\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfkkc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.150838 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bd24030-a535-4db1-b620-44d9c5c7a655-config\") pod \"etcd-operator-b45778765-h4p5q\" (UID: \"6bd24030-a535-4db1-b620-44d9c5c7a655\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h4p5q" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.151097 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/52aa8dbd-b4ec-4579-b036-a1dcf35567a5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7dd6s\" (UID: \"52aa8dbd-b4ec-4579-b036-a1dcf35567a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7dd6s" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 
09:44:58.152173 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8772ecf1-3bce-4573-91de-daf37c1ef762-webhook-cert\") pod \"packageserver-d55dfcdfc-kpdjc\" (UID: \"8772ecf1-3bce-4573-91de-daf37c1ef762\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kpdjc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.153765 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9b832c2-b2a0-4017-a323-c317ec4c1c1c-service-ca-bundle\") pod \"router-default-5444994796-56x8k\" (UID: \"f9b832c2-b2a0-4017-a323-c317ec4c1c1c\") " pod="openshift-ingress/router-default-5444994796-56x8k" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.160181 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd91bca5-eb6e-4fcf-b8c8-013e057a95d0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-zjvhq\" (UID: \"bd91bca5-eb6e-4fcf-b8c8-013e057a95d0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zjvhq" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.160510 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6bd24030-a535-4db1-b620-44d9c5c7a655-serving-cert\") pod \"etcd-operator-b45778765-h4p5q\" (UID: \"6bd24030-a535-4db1-b620-44d9c5c7a655\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h4p5q" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.162570 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd91bca5-eb6e-4fcf-b8c8-013e057a95d0-config\") pod \"kube-controller-manager-operator-78b949d7b-zjvhq\" (UID: \"bd91bca5-eb6e-4fcf-b8c8-013e057a95d0\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zjvhq" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.162744 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4chpc" event={"ID":"2236bd7c-4f4c-48a1-82cf-0b406ff1934f","Type":"ContainerStarted","Data":"2c72a5794db33c1744c28ce3807b59cd189e1866f71cc5a56db2a4202bf0a1a4"} Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.175645 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e0fcf66-e50c-4c4c-9370-08ed336d25d9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2v6fb\" (UID: \"5e0fcf66-e50c-4c4c-9370-08ed336d25d9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2v6fb" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.176420 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88b545dd-0faa-4093-8fd9-40b693b3ef87-serving-cert\") pod \"service-ca-operator-777779d784-47ksl\" (UID: \"88b545dd-0faa-4093-8fd9-40b693b3ef87\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-47ksl" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.176591 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f9b832c2-b2a0-4017-a323-c317ec4c1c1c-stats-auth\") pod \"router-default-5444994796-56x8k\" (UID: \"f9b832c2-b2a0-4017-a323-c317ec4c1c1c\") " pod="openshift-ingress/router-default-5444994796-56x8k" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.176803 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9b832c2-b2a0-4017-a323-c317ec4c1c1c-metrics-certs\") pod 
\"router-default-5444994796-56x8k\" (UID: \"f9b832c2-b2a0-4017-a323-c317ec4c1c1c\") " pod="openshift-ingress/router-default-5444994796-56x8k" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.181278 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/52aa8dbd-b4ec-4579-b036-a1dcf35567a5-proxy-tls\") pod \"machine-config-operator-74547568cd-7dd6s\" (UID: \"52aa8dbd-b4ec-4579-b036-a1dcf35567a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7dd6s" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.182465 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz8rh\" (UniqueName: \"kubernetes.io/projected/49cf856e-b37d-4ab6-9c6e-241cbc4be93e-kube-api-access-hz8rh\") pod \"machine-api-operator-5694c8668f-hqt8l\" (UID: \"49cf856e-b37d-4ab6-9c6e-241cbc4be93e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hqt8l" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.182470 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f9b832c2-b2a0-4017-a323-c317ec4c1c1c-default-certificate\") pod \"router-default-5444994796-56x8k\" (UID: \"f9b832c2-b2a0-4017-a323-c317ec4c1c1c\") " pod="openshift-ingress/router-default-5444994796-56x8k" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.189572 4965 generic.go:334] "Generic (PLEG): container finished" podID="c19a4260-b2ba-478a-8fa6-e2045fe1b4ee" containerID="b460590f41e3c7ccfb220c3f2348de8a9c53bc30304d4b09f455a7cf6d1615f5" exitCode=0 Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.191665 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8ngq" event={"ID":"c19a4260-b2ba-478a-8fa6-e2045fe1b4ee","Type":"ContainerDied","Data":"b460590f41e3c7ccfb220c3f2348de8a9c53bc30304d4b09f455a7cf6d1615f5"} Feb 19 09:44:58 
crc kubenswrapper[4965]: I0219 09:44:58.211231 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm" event={"ID":"06d2ff1d-a88d-464e-9ab2-fe2bc7b67cf3","Type":"ContainerStarted","Data":"af77168703eacb9bff32512f35adb7a61cec91b27bc4f3ef5c7a0ba5c6f2f982"} Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.213490 4965 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-hdbpp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.213548 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" podUID="f2aae678-17fc-4272-be7e-839946082d8b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.216893 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdr4x\" (UniqueName: \"kubernetes.io/projected/ff842fd5-2dc6-4411-b644-6de86566ab22-kube-api-access-gdr4x\") pod \"kube-storage-version-migrator-operator-b67b599dd-vfkkc\" (UID: \"ff842fd5-2dc6-4411-b644-6de86566ab22\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfkkc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.218796 4965 patch_prober.go:28] interesting pod/downloads-7954f5f757-8b2cq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.218840 4965 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8b2cq" podUID="efb57d4d-b3d4-42fa-a27b-299bdf135836" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.222658 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.227716 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.228765 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cggwj\" (UniqueName: \"kubernetes.io/projected/164f68fa-9132-47c9-9c23-bca749b3f4e8-kube-api-access-cggwj\") pod \"ingress-operator-5b745b69d9-bjhlw\" (UID: \"164f68fa-9132-47c9-9c23-bca749b3f4e8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bjhlw" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.235975 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.236214 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/32c44420-8d84-4c62-afc4-dad00a930b62-cert\") pod \"ingress-canary-gmzqb\" (UID: \"32c44420-8d84-4c62-afc4-dad00a930b62\") " pod="openshift-ingress-canary/ingress-canary-gmzqb" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.236241 4965 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-c4vxm\" (UniqueName: \"kubernetes.io/projected/32c44420-8d84-4c62-afc4-dad00a930b62-kube-api-access-c4vxm\") pod \"ingress-canary-gmzqb\" (UID: \"32c44420-8d84-4c62-afc4-dad00a930b62\") " pod="openshift-ingress-canary/ingress-canary-gmzqb" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.236287 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph6d9\" (UniqueName: \"kubernetes.io/projected/cdd71340-9555-4441-b38c-89d5d4cc306b-kube-api-access-ph6d9\") pod \"dns-default-t4rcw\" (UID: \"cdd71340-9555-4441-b38c-89d5d4cc306b\") " pod="openshift-dns/dns-default-t4rcw" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.236319 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6fd7c237-27ab-45f9-a23b-d18372f1c28a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-twxbq\" (UID: \"6fd7c237-27ab-45f9-a23b-d18372f1c28a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-twxbq" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.236354 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/23c8c1d2-4e7b-4cd4-99cf-92130064bbbf-plugins-dir\") pod \"csi-hostpathplugin-sbvpc\" (UID: \"23c8c1d2-4e7b-4cd4-99cf-92130064bbbf\") " pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.236390 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/23c8c1d2-4e7b-4cd4-99cf-92130064bbbf-socket-dir\") pod \"csi-hostpathplugin-sbvpc\" (UID: \"23c8c1d2-4e7b-4cd4-99cf-92130064bbbf\") " pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.236424 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"registration-dir\" (UniqueName: \"kubernetes.io/host-path/23c8c1d2-4e7b-4cd4-99cf-92130064bbbf-registration-dir\") pod \"csi-hostpathplugin-sbvpc\" (UID: \"23c8c1d2-4e7b-4cd4-99cf-92130064bbbf\") " pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.236441 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b28sm\" (UniqueName: \"kubernetes.io/projected/23c8c1d2-4e7b-4cd4-99cf-92130064bbbf-kube-api-access-b28sm\") pod \"csi-hostpathplugin-sbvpc\" (UID: \"23c8c1d2-4e7b-4cd4-99cf-92130064bbbf\") " pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.236461 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/10c3f447-9428-4d95-98ce-69f6c5f8bf38-node-bootstrap-token\") pod \"machine-config-server-th66b\" (UID: \"10c3f447-9428-4d95-98ce-69f6c5f8bf38\") " pod="openshift-machine-config-operator/machine-config-server-th66b" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.236500 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/23c8c1d2-4e7b-4cd4-99cf-92130064bbbf-csi-data-dir\") pod \"csi-hostpathplugin-sbvpc\" (UID: \"23c8c1d2-4e7b-4cd4-99cf-92130064bbbf\") " pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.236520 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdd71340-9555-4441-b38c-89d5d4cc306b-config-volume\") pod \"dns-default-t4rcw\" (UID: \"cdd71340-9555-4441-b38c-89d5d4cc306b\") " pod="openshift-dns/dns-default-t4rcw" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.236579 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" 
(UniqueName: \"kubernetes.io/secret/10c3f447-9428-4d95-98ce-69f6c5f8bf38-certs\") pod \"machine-config-server-th66b\" (UID: \"10c3f447-9428-4d95-98ce-69f6c5f8bf38\") " pod="openshift-machine-config-operator/machine-config-server-th66b" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.236599 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46bld\" (UniqueName: \"kubernetes.io/projected/10c3f447-9428-4d95-98ce-69f6c5f8bf38-kube-api-access-46bld\") pod \"machine-config-server-th66b\" (UID: \"10c3f447-9428-4d95-98ce-69f6c5f8bf38\") " pod="openshift-machine-config-operator/machine-config-server-th66b" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.236636 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/23c8c1d2-4e7b-4cd4-99cf-92130064bbbf-mountpoint-dir\") pod \"csi-hostpathplugin-sbvpc\" (UID: \"23c8c1d2-4e7b-4cd4-99cf-92130064bbbf\") " pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.236651 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwwtg\" (UniqueName: \"kubernetes.io/projected/6fd7c237-27ab-45f9-a23b-d18372f1c28a-kube-api-access-gwwtg\") pod \"multus-admission-controller-857f4d67dd-twxbq\" (UID: \"6fd7c237-27ab-45f9-a23b-d18372f1c28a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-twxbq" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.237138 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cdd71340-9555-4441-b38c-89d5d4cc306b-metrics-tls\") pod \"dns-default-t4rcw\" (UID: \"cdd71340-9555-4441-b38c-89d5d4cc306b\") " pod="openshift-dns/dns-default-t4rcw" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.238356 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"socket-dir\" (UniqueName: \"kubernetes.io/host-path/23c8c1d2-4e7b-4cd4-99cf-92130064bbbf-socket-dir\") pod \"csi-hostpathplugin-sbvpc\" (UID: \"23c8c1d2-4e7b-4cd4-99cf-92130064bbbf\") " pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.242402 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/23c8c1d2-4e7b-4cd4-99cf-92130064bbbf-registration-dir\") pod \"csi-hostpathplugin-sbvpc\" (UID: \"23c8c1d2-4e7b-4cd4-99cf-92130064bbbf\") " pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.243605 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/23c8c1d2-4e7b-4cd4-99cf-92130064bbbf-plugins-dir\") pod \"csi-hostpathplugin-sbvpc\" (UID: \"23c8c1d2-4e7b-4cd4-99cf-92130064bbbf\") " pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.245540 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/23c8c1d2-4e7b-4cd4-99cf-92130064bbbf-csi-data-dir\") pod \"csi-hostpathplugin-sbvpc\" (UID: \"23c8c1d2-4e7b-4cd4-99cf-92130064bbbf\") " pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.245771 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdd71340-9555-4441-b38c-89d5d4cc306b-config-volume\") pod \"dns-default-t4rcw\" (UID: \"cdd71340-9555-4441-b38c-89d5d4cc306b\") " pod="openshift-dns/dns-default-t4rcw" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.245980 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfkkc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.246086 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/23c8c1d2-4e7b-4cd4-99cf-92130064bbbf-mountpoint-dir\") pod \"csi-hostpathplugin-sbvpc\" (UID: \"23c8c1d2-4e7b-4cd4-99cf-92130064bbbf\") " pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" Feb 19 09:44:58 crc kubenswrapper[4965]: E0219 09:44:58.246629 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:44:58.746608102 +0000 UTC m=+154.367929412 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.247930 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cdd71340-9555-4441-b38c-89d5d4cc306b-metrics-tls\") pod \"dns-default-t4rcw\" (UID: \"cdd71340-9555-4441-b38c-89d5d4cc306b\") " pod="openshift-dns/dns-default-t4rcw" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.254799 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tbd4\" (UniqueName: \"kubernetes.io/projected/6c8a56a8-ed46-4e4a-9dbd-de3914ee3581-kube-api-access-7tbd4\") pod 
\"service-ca-9c57cc56f-kvwsm\" (UID: \"6c8a56a8-ed46-4e4a-9dbd-de3914ee3581\") " pod="openshift-service-ca/service-ca-9c57cc56f-kvwsm" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.264886 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/10c3f447-9428-4d95-98ce-69f6c5f8bf38-certs\") pod \"machine-config-server-th66b\" (UID: \"10c3f447-9428-4d95-98ce-69f6c5f8bf38\") " pod="openshift-machine-config-operator/machine-config-server-th66b" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.273934 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6fd7c237-27ab-45f9-a23b-d18372f1c28a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-twxbq\" (UID: \"6fd7c237-27ab-45f9-a23b-d18372f1c28a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-twxbq" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.280381 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv2cm\" (UniqueName: \"kubernetes.io/projected/6bd24030-a535-4db1-b620-44d9c5c7a655-kube-api-access-qv2cm\") pod \"etcd-operator-b45778765-h4p5q\" (UID: \"6bd24030-a535-4db1-b620-44d9c5c7a655\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h4p5q" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.301577 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zjvhq" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.303250 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hqt8l" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.305785 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/32c44420-8d84-4c62-afc4-dad00a930b62-cert\") pod \"ingress-canary-gmzqb\" (UID: \"32c44420-8d84-4c62-afc4-dad00a930b62\") " pod="openshift-ingress-canary/ingress-canary-gmzqb" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.307215 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/10c3f447-9428-4d95-98ce-69f6c5f8bf38-node-bootstrap-token\") pod \"machine-config-server-th66b\" (UID: \"10c3f447-9428-4d95-98ce-69f6c5f8bf38\") " pod="openshift-machine-config-operator/machine-config-server-th66b" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.312761 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/164f68fa-9132-47c9-9c23-bca749b3f4e8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bjhlw\" (UID: \"164f68fa-9132-47c9-9c23-bca749b3f4e8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bjhlw" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.313504 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltxjm\" (UniqueName: \"kubernetes.io/projected/22aed16a-0375-45f1-8762-8d5afddf848a-kube-api-access-ltxjm\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.318886 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2nnb\" (UniqueName: \"kubernetes.io/projected/8c7730fc-ade8-4092-8923-54264965e892-kube-api-access-p2nnb\") pod \"migrator-59844c95c7-nq2jk\" (UID: 
\"8c7730fc-ade8-4092-8923-54264965e892\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nq2jk" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.327640 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8dwq"] Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.330948 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pbfgx"] Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.335522 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6f8hc"] Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.338622 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.343970 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzmmd\" (UniqueName: \"kubernetes.io/projected/8772ecf1-3bce-4573-91de-daf37c1ef762-kube-api-access-fzmmd\") pod \"packageserver-d55dfcdfc-kpdjc\" (UID: \"8772ecf1-3bce-4573-91de-daf37c1ef762\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kpdjc" Feb 19 09:44:58 crc kubenswrapper[4965]: E0219 09:44:58.346710 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:44:58.84666764 +0000 UTC m=+154.467988950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.355563 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-h4p5q" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.356317 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b4md\" (UniqueName: \"kubernetes.io/projected/52aa8dbd-b4ec-4579-b036-a1dcf35567a5-kube-api-access-5b4md\") pod \"machine-config-operator-74547568cd-7dd6s\" (UID: \"52aa8dbd-b4ec-4579-b036-a1dcf35567a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7dd6s" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.375044 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nq2jk" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.375925 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bjhlw" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.384910 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prq2s\" (UniqueName: \"kubernetes.io/projected/082ad3b5-c0c5-437c-8077-395c6ec09ec3-kube-api-access-prq2s\") pod \"cluster-image-registry-operator-dc59b4c8b-f6hln\" (UID: \"082ad3b5-c0c5-437c-8077-395c6ec09ec3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6hln" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.404804 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/082ad3b5-c0c5-437c-8077-395c6ec09ec3-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-f6hln\" (UID: \"082ad3b5-c0c5-437c-8077-395c6ec09ec3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6hln" Feb 19 09:44:58 crc kubenswrapper[4965]: W0219 09:44:58.404928 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16b70f8e_c2b6_4545_813e_23b82399a149.slice/crio-a157a50a1fc9255f27c2470da16c0b09b0da070b831f2044736d3e2b618c9161 WatchSource:0}: Error finding container a157a50a1fc9255f27c2470da16c0b09b0da070b831f2044736d3e2b618c9161: Status 404 returned error can't find the container with id a157a50a1fc9255f27c2470da16c0b09b0da070b831f2044736d3e2b618c9161 Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.405143 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6hln" Feb 19 09:44:58 crc kubenswrapper[4965]: W0219 09:44:58.405534 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cba3a97_d8fe_4f88_9d3d_ed1a49713f1c.slice/crio-35ceddf626e6b845b8c4ff4742e68ca3c0e9bdd5158e47e83e3a816cdd40eb2d WatchSource:0}: Error finding container 35ceddf626e6b845b8c4ff4742e68ca3c0e9bdd5158e47e83e3a816cdd40eb2d: Status 404 returned error can't find the container with id 35ceddf626e6b845b8c4ff4742e68ca3c0e9bdd5158e47e83e3a816cdd40eb2d Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.413394 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kvwsm" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.423185 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z79fd\" (UniqueName: \"kubernetes.io/projected/f9b832c2-b2a0-4017-a323-c317ec4c1c1c-kube-api-access-z79fd\") pod \"router-default-5444994796-56x8k\" (UID: \"f9b832c2-b2a0-4017-a323-c317ec4c1c1c\") " pod="openshift-ingress/router-default-5444994796-56x8k" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.430746 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4e92446-36cd-4840-97fc-d9f0d60e4e7d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lz7fh\" (UID: \"b4e92446-36cd-4840-97fc-d9f0d60e4e7d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lz7fh" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.443431 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:44:58 crc kubenswrapper[4965]: E0219 09:44:58.443837 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:44:58.943821498 +0000 UTC m=+154.565142808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.502376 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22aed16a-0375-45f1-8762-8d5afddf848a-bound-sa-token\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.513513 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsd2b\" (UniqueName: \"kubernetes.io/projected/88b545dd-0faa-4093-8fd9-40b693b3ef87-kube-api-access-dsd2b\") pod \"service-ca-operator-777779d784-47ksl\" (UID: \"88b545dd-0faa-4093-8fd9-40b693b3ef87\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-47ksl" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.514621 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxfhn\" (UniqueName: 
\"kubernetes.io/projected/cd586f77-cb07-42a9-b20c-bf06ed856469-kube-api-access-fxfhn\") pod \"machine-config-controller-84d6567774-qnhlh\" (UID: \"cd586f77-cb07-42a9-b20c-bf06ed856469\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qnhlh" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.520231 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9ft7\" (UniqueName: \"kubernetes.io/projected/5e0fcf66-e50c-4c4c-9370-08ed336d25d9-kube-api-access-k9ft7\") pod \"control-plane-machine-set-operator-78cbb6b69f-2v6fb\" (UID: \"5e0fcf66-e50c-4c4c-9370-08ed336d25d9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2v6fb" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.533707 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vkxtv"] Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.553766 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj27w\" (UniqueName: \"kubernetes.io/projected/190603fe-6420-4d17-91f5-c37c9038002c-kube-api-access-lj27w\") pod \"marketplace-operator-79b997595-vkrsj\" (UID: \"190603fe-6420-4d17-91f5-c37c9038002c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.562718 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7dd6s" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.564132 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:58 crc kubenswrapper[4965]: E0219 09:44:58.564569 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:44:59.06455529 +0000 UTC m=+154.685876600 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.586532 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b28sm\" (UniqueName: \"kubernetes.io/projected/23c8c1d2-4e7b-4cd4-99cf-92130064bbbf-kube-api-access-b28sm\") pod \"csi-hostpathplugin-sbvpc\" (UID: \"23c8c1d2-4e7b-4cd4-99cf-92130064bbbf\") " pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.598054 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdd5s"] Feb 19 09:44:58 crc 
kubenswrapper[4965]: I0219 09:44:58.598356 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.614779 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-b2d5q" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.618601 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qnhlh" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.619403 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph6d9\" (UniqueName: \"kubernetes.io/projected/cdd71340-9555-4441-b38c-89d5d4cc306b-kube-api-access-ph6d9\") pod \"dns-default-t4rcw\" (UID: \"cdd71340-9555-4441-b38c-89d5d4cc306b\") " pod="openshift-dns/dns-default-t4rcw" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.626004 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kpdjc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.632628 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lz7fh" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.635063 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4vxm\" (UniqueName: \"kubernetes.io/projected/32c44420-8d84-4c62-afc4-dad00a930b62-kube-api-access-c4vxm\") pod \"ingress-canary-gmzqb\" (UID: \"32c44420-8d84-4c62-afc4-dad00a930b62\") " pod="openshift-ingress-canary/ingress-canary-gmzqb" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.638298 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-56x8k" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.643291 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46bld\" (UniqueName: \"kubernetes.io/projected/10c3f447-9428-4d95-98ce-69f6c5f8bf38-kube-api-access-46bld\") pod \"machine-config-server-th66b\" (UID: \"10c3f447-9428-4d95-98ce-69f6c5f8bf38\") " pod="openshift-machine-config-operator/machine-config-server-th66b" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.662983 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwwtg\" (UniqueName: \"kubernetes.io/projected/6fd7c237-27ab-45f9-a23b-d18372f1c28a-kube-api-access-gwwtg\") pod \"multus-admission-controller-857f4d67dd-twxbq\" (UID: \"6fd7c237-27ab-45f9-a23b-d18372f1c28a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-twxbq" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.666539 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:44:58 crc kubenswrapper[4965]: E0219 09:44:58.667031 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:44:59.167014206 +0000 UTC m=+154.788335516 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.687592 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2v6fb" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.695544 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-47ksl" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.720857 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bdvzx"] Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.725724 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-twxbq" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.729265 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-hgzq5"] Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.759554 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.768571 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.768899 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-th66b" Feb 19 09:44:58 crc kubenswrapper[4965]: E0219 09:44:58.769415 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:44:59.269401881 +0000 UTC m=+154.890723191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.774227 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-gmzqb" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.781774 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-t4rcw" Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.787396 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5"] Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.787903 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gzld"] Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.871448 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:44:58 crc kubenswrapper[4965]: E0219 09:44:58.871619 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:44:59.371599562 +0000 UTC m=+154.992920872 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.871894 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:58 crc kubenswrapper[4965]: E0219 09:44:58.872342 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:44:59.37233574 +0000 UTC m=+154.993657050 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:44:58 crc kubenswrapper[4965]: I0219 09:44:58.973064 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:44:58 crc kubenswrapper[4965]: E0219 09:44:58.979315 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:44:59.479271986 +0000 UTC m=+155.100593306 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.029285 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-19 09:39:57 +0000 UTC, rotation deadline is 2026-11-15 11:23:46.764231839 +0000 UTC Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.029362 4965 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6457h38m47.734873343s for next certificate rotation Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.071099 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zjvhq"] Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.074605 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:59 crc kubenswrapper[4965]: E0219 09:44:59.074959 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:44:59.574945018 +0000 UTC m=+155.196266318 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.121422 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hqt8l"] Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.159121 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bjhlw"] Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.186908 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:44:59 crc kubenswrapper[4965]: E0219 09:44:59.187314 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:44:59.687288685 +0000 UTC m=+155.308609995 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.222535 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4chpc" podStartSLOduration=132.222428481 podStartE2EDuration="2m12.222428481s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:59.219423288 +0000 UTC m=+154.840744588" watchObservedRunningTime="2026-02-19 09:44:59.222428481 +0000 UTC m=+154.843749801" Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.298869 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:59 crc kubenswrapper[4965]: E0219 09:44:59.299370 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:44:59.799350886 +0000 UTC m=+155.420672196 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.343040 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8ngq" Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.343102 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8ngq" event={"ID":"c19a4260-b2ba-478a-8fa6-e2045fe1b4ee","Type":"ContainerStarted","Data":"e84930ac495ca2818617256ad55cff2361d0e5092ff459646a4a140c532803fb"} Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.343126 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pbfgx" event={"ID":"13fcdc33-7dcb-4d34-86ca-bd40d679560e","Type":"ContainerStarted","Data":"911554071c53f8ab1d704a5fbd543424a26210d0dd3b99edc085c42f13ebe9dd"} Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.343137 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdd5s" event={"ID":"a347c331-9240-4c72-940b-60042b98960f","Type":"ContainerStarted","Data":"e2c4a9bf06551dee17cadb49cc7cf3dd49b435d612f747047be9288c71a37dde"} Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.343147 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hgzq5" 
event={"ID":"91fd349f-c4be-4636-a5a9-76ed721d9afa","Type":"ContainerStarted","Data":"a613bbd6017aa896e75425cc8b56aaab8c3cb8f219f72fa25f980d60a1fbe4c6"} Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.344375 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6f8hc" event={"ID":"16b70f8e-c2b6-4545-813e-23b82399a149","Type":"ContainerStarted","Data":"a157a50a1fc9255f27c2470da16c0b09b0da070b831f2044736d3e2b618c9161"} Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.345694 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6f8hc" Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.354571 4965 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-6f8hc container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.354648 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6f8hc" podUID="16b70f8e-c2b6-4545-813e-23b82399a149" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.407972 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:44:59 crc kubenswrapper[4965]: E0219 09:44:59.409363 4965 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:44:59.909321155 +0000 UTC m=+155.530642475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.410172 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8dwq" event={"ID":"6cba3a97-d8fe-4f88-9d3d-ed1a49713f1c","Type":"ContainerStarted","Data":"9364ab368102f791711e88f3d41d71463ba8ac167a7100f3bf05366aea2262b6"} Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.410247 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8dwq" event={"ID":"6cba3a97-d8fe-4f88-9d3d-ed1a49713f1c","Type":"ContainerStarted","Data":"35ceddf626e6b845b8c4ff4742e68ca3c0e9bdd5158e47e83e3a816cdd40eb2d"} Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.420109 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vkxtv" event={"ID":"d6fcc552-ae72-46a3-9525-cfb460da05e1","Type":"ContainerStarted","Data":"40dfbe8f441ead422db43c95e97063754eec29136650462abf8e1f118616bea0"} Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.426160 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm" 
podStartSLOduration=131.426142805 podStartE2EDuration="2m11.426142805s" podCreationTimestamp="2026-02-19 09:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:59.425821708 +0000 UTC m=+155.047143028" watchObservedRunningTime="2026-02-19 09:44:59.426142805 +0000 UTC m=+155.047464105" Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.481225 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-8b2cq" podStartSLOduration=132.481184417 podStartE2EDuration="2m12.481184417s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:59.480376507 +0000 UTC m=+155.101697827" watchObservedRunningTime="2026-02-19 09:44:59.481184417 +0000 UTC m=+155.102505727" Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.509139 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:59 crc kubenswrapper[4965]: E0219 09:44:59.509664 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:00.009648061 +0000 UTC m=+155.630969371 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.631453 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:44:59 crc kubenswrapper[4965]: E0219 09:44:59.633737 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:00.133710804 +0000 UTC m=+155.755032114 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.698572 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfkkc"] Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.733414 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:59 crc kubenswrapper[4965]: E0219 09:44:59.733756 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:00.233742881 +0000 UTC m=+155.855064191 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.737698 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" podStartSLOduration=132.737677467 podStartE2EDuration="2m12.737677467s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:59.710610608 +0000 UTC m=+155.331931918" watchObservedRunningTime="2026-02-19 09:44:59.737677467 +0000 UTC m=+155.358998777" Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.834159 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:44:59 crc kubenswrapper[4965]: E0219 09:44:59.834535 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:00.334519227 +0000 UTC m=+155.955840537 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:44:59 crc kubenswrapper[4965]: I0219 09:44:59.957265 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:44:59 crc kubenswrapper[4965]: E0219 09:44:59.958127 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:00.458111569 +0000 UTC m=+156.079432879 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.061275 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:00 crc kubenswrapper[4965]: E0219 09:45:00.061864 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:00.561842016 +0000 UTC m=+156.183163336 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.070586 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv" podStartSLOduration=132.07056227 podStartE2EDuration="2m12.07056227s" podCreationTimestamp="2026-02-19 09:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:00.070084947 +0000 UTC m=+155.691406257" watchObservedRunningTime="2026-02-19 09:45:00.07056227 +0000 UTC m=+155.691883580" Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.071069 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-66l86" podStartSLOduration=133.071064442 podStartE2EDuration="2m13.071064442s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:44:59.989440753 +0000 UTC m=+155.610762063" watchObservedRunningTime="2026-02-19 09:45:00.071064442 +0000 UTC m=+155.692385752" Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.171341 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" 
(UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:00 crc kubenswrapper[4965]: E0219 09:45:00.171805 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:00.671789286 +0000 UTC m=+156.293110596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.179434 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5"] Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.214738 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524905-682z8"] Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.215587 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-682z8"
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.237378 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" podStartSLOduration=133.237356794 podStartE2EDuration="2m13.237356794s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:00.23718572 +0000 UTC m=+155.858507030" watchObservedRunningTime="2026-02-19 09:45:00.237356794 +0000 UTC m=+155.858678104"
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.263075 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524905-682z8"]
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.294091 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.297616 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79e5acf4-3803-4356-aa12-622cceae90a5-secret-volume\") pod \"collect-profiles-29524905-682z8\" (UID: \"79e5acf4-3803-4356-aa12-622cceae90a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-682z8"
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.298361 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pnbw\" (UniqueName: \"kubernetes.io/projected/79e5acf4-3803-4356-aa12-622cceae90a5-kube-api-access-2pnbw\") pod \"collect-profiles-29524905-682z8\" (UID: \"79e5acf4-3803-4356-aa12-622cceae90a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-682z8"
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.298418 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79e5acf4-3803-4356-aa12-622cceae90a5-config-volume\") pod \"collect-profiles-29524905-682z8\" (UID: \"79e5acf4-3803-4356-aa12-622cceae90a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-682z8"
Feb 19 09:45:00 crc kubenswrapper[4965]: E0219 09:45:00.299012 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:00.798989476 +0000 UTC m=+156.420310786 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.383459 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6hln"]
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.411704 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79e5acf4-3803-4356-aa12-622cceae90a5-secret-volume\") pod \"collect-profiles-29524905-682z8\" (UID: \"79e5acf4-3803-4356-aa12-622cceae90a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-682z8"
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.411790 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4"
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.411896 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pnbw\" (UniqueName: \"kubernetes.io/projected/79e5acf4-3803-4356-aa12-622cceae90a5-kube-api-access-2pnbw\") pod \"collect-profiles-29524905-682z8\" (UID: \"79e5acf4-3803-4356-aa12-622cceae90a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-682z8"
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.411926 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79e5acf4-3803-4356-aa12-622cceae90a5-config-volume\") pod \"collect-profiles-29524905-682z8\" (UID: \"79e5acf4-3803-4356-aa12-622cceae90a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-682z8"
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.414903 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79e5acf4-3803-4356-aa12-622cceae90a5-config-volume\") pod \"collect-profiles-29524905-682z8\" (UID: \"79e5acf4-3803-4356-aa12-622cceae90a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-682z8"
Feb 19 09:45:00 crc kubenswrapper[4965]: E0219 09:45:00.420002 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:00.919966754 +0000 UTC m=+156.541288064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.422777 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-msn62" podStartSLOduration=133.422752221 podStartE2EDuration="2m13.422752221s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:00.420139378 +0000 UTC m=+156.041460688" watchObservedRunningTime="2026-02-19 09:45:00.422752221 +0000 UTC m=+156.044073531"
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.471717 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kvwsm"]
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.497568 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pbfgx" event={"ID":"13fcdc33-7dcb-4d34-86ca-bd40d679560e","Type":"ContainerStarted","Data":"cffbbfeabbe4ffbb2ce7781d50de23bea6e802d02e25d489b29dcdd6bc0e046a"}
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.528493 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 09:45:00 crc kubenswrapper[4965]: E0219 09:45:00.529099 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:01.029072573 +0000 UTC m=+156.650393883 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.550626 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-25n6z" event={"ID":"715c2ecd-ac0a-4758-9ded-2ce22952b44f","Type":"ContainerStarted","Data":"c8d4e4e06be2eb1698bc3e187ba4f60f529bd85189ccd80dfcc467745c88386c"}
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.560762 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79e5acf4-3803-4356-aa12-622cceae90a5-secret-volume\") pod \"collect-profiles-29524905-682z8\" (UID: \"79e5acf4-3803-4356-aa12-622cceae90a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-682z8"
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.560863 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pnbw\" (UniqueName: \"kubernetes.io/projected/79e5acf4-3803-4356-aa12-622cceae90a5-kube-api-access-2pnbw\") pod \"collect-profiles-29524905-682z8\" (UID: \"79e5acf4-3803-4356-aa12-622cceae90a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-682z8"
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.562817 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-th66b" event={"ID":"10c3f447-9428-4d95-98ce-69f6c5f8bf38","Type":"ContainerStarted","Data":"956dc2b8311657a1e5ff1422fb49f84d749729dd4d4d6319409a5d56f7a7ecbf"}
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.579659 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6f8hc" event={"ID":"16b70f8e-c2b6-4545-813e-23b82399a149","Type":"ContainerStarted","Data":"8a9d1fdc02f855a5689f1074ede8e50aa65cd72b0b03591140f52877a484bc12"}
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.598092 4965 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-6f8hc container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.598153 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6f8hc" podUID="16b70f8e-c2b6-4545-813e-23b82399a149" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused"
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.613346 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hqt8l" event={"ID":"49cf856e-b37d-4ab6-9c6e-241cbc4be93e","Type":"ContainerStarted","Data":"44ef3ff96eec22451071069042b5de35b50e977f692ea2bac7f580f8ac1cf371"}
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.637473 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-h4p5q"]
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.638025 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4"
Feb 19 09:45:00 crc kubenswrapper[4965]: E0219 09:45:00.638577 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:01.13855591 +0000 UTC m=+156.759877220 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.642059 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdd5s" event={"ID":"a347c331-9240-4c72-940b-60042b98960f","Type":"ContainerStarted","Data":"c50833bc9e91fd62be2874b531d1059bd0bb2daa726ac5a448e2c168d6b286aa"}
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.643533 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdd5s"
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.643821 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nq2jk"]
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.646966 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5" event={"ID":"7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe","Type":"ContainerStarted","Data":"19ec5e8d7adb6947d3876be8a9af44f570f0c7923e4793ad1ee4e674c3591716"}
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.647020 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5" event={"ID":"7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe","Type":"ContainerStarted","Data":"a25b65e95de239ab8087cd5ea714899614f129b5af9489dd4a1c817a65a7cf60"}
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.647212 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5" podUID="7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe" containerName="collect-profiles" containerID="cri-o://19ec5e8d7adb6947d3876be8a9af44f570f0c7923e4793ad1ee4e674c3591716" gracePeriod=30
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.657718 4965 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mdd5s container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.657802 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdd5s" podUID="a347c331-9240-4c72-940b-60042b98960f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.659207 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-682z8"
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.681370 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hgzq5" event={"ID":"91fd349f-c4be-4636-a5a9-76ed721d9afa","Type":"ContainerStarted","Data":"0fbbf58178d7bb77de8f645e6b9b5209d0f9eca6ff3a1b19f345fb0cb4e93c29"}
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.681418 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-b2d5q" podStartSLOduration=133.681378164 podStartE2EDuration="2m13.681378164s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:00.672394176 +0000 UTC m=+156.293715506" watchObservedRunningTime="2026-02-19 09:45:00.681378164 +0000 UTC m=+156.302699484"
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.689360 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-56x8k" event={"ID":"f9b832c2-b2a0-4017-a323-c317ec4c1c1c","Type":"ContainerStarted","Data":"72c62924449df7d71b846a5c222ea6fa241829cfa01665dc1599b87138cb97a6"}
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.689425 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-56x8k" event={"ID":"f9b832c2-b2a0-4017-a323-c317ec4c1c1c","Type":"ContainerStarted","Data":"4caa1f43e8c4786c073aaa5725b59f5e4f3b92b6ec25a19f495b61ffdae443be"}
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.704548 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zjvhq" event={"ID":"bd91bca5-eb6e-4fcf-b8c8-013e057a95d0","Type":"ContainerStarted","Data":"f44d24248d00f0b14648913944856841c5a9c17dd88192806661baf11ff1ea07"}
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.705879 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bjhlw" event={"ID":"164f68fa-9132-47c9-9c23-bca749b3f4e8","Type":"ContainerStarted","Data":"91880a1db1ba7a923c3403765261502352b0d50021336c0e103acb0ea2ae4482"}
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.720802 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bdvzx" event={"ID":"9dfc9a41-03c9-411f-82ad-e212654e4bc3","Type":"ContainerStarted","Data":"bf33b652fe01183eb8e9097b6fdd0f2b198a6d5505316a8119efc612238b7ba9"}
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.720855 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bdvzx" event={"ID":"9dfc9a41-03c9-411f-82ad-e212654e4bc3","Type":"ContainerStarted","Data":"77c28e08118a2e0efcd34b71718a7de34a89e680d9aea9bf496ec981d87240d3"}
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.732711 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfkkc" event={"ID":"ff842fd5-2dc6-4411-b644-6de86566ab22","Type":"ContainerStarted","Data":"7a24e09936407e58fa872daddf36844a7ecb173e72c02d4fa215163d93615640"}
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.743366 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gzld" event={"ID":"c6cb9e72-09ea-41da-97ef-a5501b57a58b","Type":"ContainerStarted","Data":"9b4d20d2a1db21a048d7abaf502943c7651c4c8eb8bad2e91b133ea5e98d37af"}
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.755444 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 09:45:00 crc kubenswrapper[4965]: E0219 09:45:00.756351 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:01.25633303 +0000 UTC m=+156.877654340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.778831 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7dd6s"]
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.788317 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qnhlh"]
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.864182 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4"
Feb 19 09:45:00 crc kubenswrapper[4965]: E0219 09:45:00.864625 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:01.36461232 +0000 UTC m=+156.985933630 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.865402 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5" podStartSLOduration=133.865358157 podStartE2EDuration="2m13.865358157s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:00.844429337 +0000 UTC m=+156.465750657" watchObservedRunningTime="2026-02-19 09:45:00.865358157 +0000 UTC m=+156.486679467"
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.971428 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 09:45:00 crc kubenswrapper[4965]: E0219 09:45:00.973177 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:01.473146624 +0000 UTC m=+157.094467934 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.979032 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdd5s" podStartSLOduration=132.979012698 podStartE2EDuration="2m12.979012698s" podCreationTimestamp="2026-02-19 09:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:00.977667025 +0000 UTC m=+156.598988335" watchObservedRunningTime="2026-02-19 09:45:00.979012698 +0000 UTC m=+156.600334008"
Feb 19 09:45:00 crc kubenswrapper[4965]: I0219 09:45:00.979519 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-hgzq5" podStartSLOduration=133.979515509 podStartE2EDuration="2m13.979515509s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:00.927777499 +0000 UTC m=+156.549098809" watchObservedRunningTime="2026-02-19 09:45:00.979515509 +0000 UTC m=+156.600836809"
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.045858 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-56x8k" podStartSLOduration=134.045833775 podStartE2EDuration="2m14.045833775s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:01.036545729 +0000 UTC m=+156.657867049" watchObservedRunningTime="2026-02-19 09:45:01.045833775 +0000 UTC m=+156.667155085"
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.046042 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8ngq" podStartSLOduration=134.04603377 podStartE2EDuration="2m14.04603377s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:01.015428495 +0000 UTC m=+156.636749815" watchObservedRunningTime="2026-02-19 09:45:01.04603377 +0000 UTC m=+156.667355100"
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.081551 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4"
Feb 19 09:45:01 crc kubenswrapper[4965]: E0219 09:45:01.082025 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:01.582007897 +0000 UTC m=+157.203329217 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.082424 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm"
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.082459 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm"
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.082659 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-pbfgx" podStartSLOduration=134.082633652 podStartE2EDuration="2m14.082633652s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:01.081950445 +0000 UTC m=+156.703271755" watchObservedRunningTime="2026-02-19 09:45:01.082633652 +0000 UTC m=+156.703954982"
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.112584 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6f8hc" podStartSLOduration=133.112551231 podStartE2EDuration="2m13.112551231s" podCreationTimestamp="2026-02-19 09:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:01.111228719 +0000 UTC m=+156.732550029" watchObservedRunningTime="2026-02-19 09:45:01.112551231 +0000 UTC m=+156.733872571"
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.114909 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm"
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.193540 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 09:45:01 crc kubenswrapper[4965]: E0219 09:45:01.195294 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:01.695275217 +0000 UTC m=+157.316596527 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.216181 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q8dwq" podStartSLOduration=134.216143246 podStartE2EDuration="2m14.216143246s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:01.159869514 +0000 UTC m=+156.781190824" watchObservedRunningTime="2026-02-19 09:45:01.216143246 +0000 UTC m=+156.837464556"
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.285016 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-t4rcw"]
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.290000 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vkrsj"]
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.339257 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4"
Feb 19 09:45:01 crc kubenswrapper[4965]: E0219 09:45:01.340776 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:01.840735632 +0000 UTC m=+157.462056942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.405144 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2v6fb"]
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.410301 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-twxbq"]
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.440772 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 09:45:01 crc kubenswrapper[4965]: E0219 09:45:01.441351 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:01.941305633 +0000 UTC m=+157.562626953 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.530278 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-47ksl"]
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.530499 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lz7fh"]
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.544118 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4"
Feb 19 09:45:01 crc kubenswrapper[4965]: E0219 09:45:01.544520 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:02.044503797 +0000 UTC m=+157.665825107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.594734 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sbvpc"]
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.641455 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-56x8k"
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.644884 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 09:45:01 crc kubenswrapper[4965]: E0219 09:45:01.645274 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:02.145255372 +0000 UTC m=+157.766576682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.662437 4965 patch_prober.go:28] interesting pod/router-default-5444994796-56x8k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 19 09:45:01 crc kubenswrapper[4965]: [-]has-synced failed: reason withheld
Feb 19 09:45:01 crc kubenswrapper[4965]: [+]process-running ok
Feb 19 09:45:01 crc kubenswrapper[4965]: healthz check failed
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.662508 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-56x8k" podUID="f9b832c2-b2a0-4017-a323-c317ec4c1c1c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 09:45:01 crc kubenswrapper[4965]: W0219 09:45:01.663395 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88b545dd_0faa_4093_8fd9_40b693b3ef87.slice/crio-fee414a6db47407acc6cb3592a6ef694fcedb71ff7eb8a445d41077b1fb99a3a WatchSource:0}: Error finding container fee414a6db47407acc6cb3592a6ef694fcedb71ff7eb8a445d41077b1fb99a3a: Status 404 returned error can't find the container with id fee414a6db47407acc6cb3592a6ef694fcedb71ff7eb8a445d41077b1fb99a3a
Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.718049 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api"
pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kpdjc"] Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.718696 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524905-682z8"] Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.746951 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:01 crc kubenswrapper[4965]: E0219 09:45:01.747448 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:02.247432362 +0000 UTC m=+157.868753672 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.791916 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-gmzqb"] Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.799263 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lz7fh" event={"ID":"b4e92446-36cd-4840-97fc-d9f0d60e4e7d","Type":"ContainerStarted","Data":"e2ff1603fd7098e684b3d730261aa92899577681428e3204e29cf80f48624f3b"} Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.806625 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bjhlw" event={"ID":"164f68fa-9132-47c9-9c23-bca749b3f4e8","Type":"ContainerStarted","Data":"c1a0d228a45f07ffcc018db9b8e4193c03743e5900beacaa598dfdcb46ff21d0"} Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.813398 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-47ksl" event={"ID":"88b545dd-0faa-4093-8fd9-40b693b3ef87","Type":"ContainerStarted","Data":"fee414a6db47407acc6cb3592a6ef694fcedb71ff7eb8a445d41077b1fb99a3a"} Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.825990 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" event={"ID":"23c8c1d2-4e7b-4cd4-99cf-92130064bbbf","Type":"ContainerStarted","Data":"f7f9d24854ac29530fcf5c00934e6466d6b4c12b0bc959031e72346882213b5b"} Feb 19 
09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.840648 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hqt8l" event={"ID":"49cf856e-b37d-4ab6-9c6e-241cbc4be93e","Type":"ContainerStarted","Data":"7410765110e62fa2790886b3b13c9d36fe3e0cf18cba3decdc2fa0e0bc8d7c96"} Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.849118 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:01 crc kubenswrapper[4965]: E0219 09:45:01.849516 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:02.34949926 +0000 UTC m=+157.970820570 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.849824 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29524890-mgzh5_7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe/collect-profiles/0.log" Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.849860 4965 generic.go:334] "Generic (PLEG): container finished" podID="7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe" containerID="19ec5e8d7adb6947d3876be8a9af44f570f0c7923e4793ad1ee4e674c3591716" exitCode=2 Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.849921 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5" event={"ID":"7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe","Type":"ContainerDied","Data":"19ec5e8d7adb6947d3876be8a9af44f570f0c7923e4793ad1ee4e674c3591716"} Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.859813 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-25n6z" event={"ID":"715c2ecd-ac0a-4758-9ded-2ce22952b44f","Type":"ContainerStarted","Data":"69ec7f92f42e1d1938483c2ecbb77a46c4520c488abac31236f93e71aa72fc34"} Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.868785 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-th66b" event={"ID":"10c3f447-9428-4d95-98ce-69f6c5f8bf38","Type":"ContainerStarted","Data":"50b32d04df9140fa3591e73265c51b5cd61ea32f20665edd4dafb572b10d09a9"} Feb 19 09:45:01 crc 
kubenswrapper[4965]: I0219 09:45:01.899486 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bdvzx" event={"ID":"9dfc9a41-03c9-411f-82ad-e212654e4bc3","Type":"ContainerStarted","Data":"e1c02482dd4f26b21cdaa525dfbce1d9a9e7be3326e458b393d4323102bfa28b"} Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.900582 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bdvzx" Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.905473 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-25n6z" podStartSLOduration=134.905454493 podStartE2EDuration="2m14.905454493s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:01.901813165 +0000 UTC m=+157.523134495" watchObservedRunningTime="2026-02-19 09:45:01.905454493 +0000 UTC m=+157.526775803" Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.913730 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2v6fb" event={"ID":"5e0fcf66-e50c-4c4c-9370-08ed336d25d9","Type":"ContainerStarted","Data":"29f2b3b61dfc0c749b275aecd74ad6f3e4ae02e3ee8ca67fb5d0638f74566f9c"} Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.941337 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bdvzx" podStartSLOduration=133.941302687 podStartE2EDuration="2m13.941302687s" podCreationTimestamp="2026-02-19 09:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:01.93812259 +0000 UTC 
m=+157.559443900" watchObservedRunningTime="2026-02-19 09:45:01.941302687 +0000 UTC m=+157.562623997" Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.945292 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gzld" event={"ID":"c6cb9e72-09ea-41da-97ef-a5501b57a58b","Type":"ContainerStarted","Data":"dcbb2978f82282a3cf76215c2bff71738f9fb01dca44c4a2918311ad3240be06"} Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.954955 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:01 crc kubenswrapper[4965]: E0219 09:45:01.958346 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:02.458326412 +0000 UTC m=+158.079647722 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.970463 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vkxtv" event={"ID":"d6fcc552-ae72-46a3-9525-cfb460da05e1","Type":"ContainerStarted","Data":"fa1cac2e1e7afa8e431de7ccc2571ef049729061efb9b81892e1261e8f3dfc49"} Feb 19 09:45:01 crc kubenswrapper[4965]: I0219 09:45:01.988135 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-th66b" podStartSLOduration=6.988114907 podStartE2EDuration="6.988114907s" podCreationTimestamp="2026-02-19 09:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:01.972586129 +0000 UTC m=+157.593907459" watchObservedRunningTime="2026-02-19 09:45:01.988114907 +0000 UTC m=+157.609436207" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.010022 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfkkc" event={"ID":"ff842fd5-2dc6-4411-b644-6de86566ab22","Type":"ContainerStarted","Data":"0c5ddb66a64395169c0f2f9fcfda442f06c17318fa19423dcbdc57c5ab804a05"} Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.024250 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gzld" 
podStartSLOduration=135.024226438 podStartE2EDuration="2m15.024226438s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:02.022989617 +0000 UTC m=+157.644310947" watchObservedRunningTime="2026-02-19 09:45:02.024226438 +0000 UTC m=+157.645547748" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.042856 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-twxbq" event={"ID":"6fd7c237-27ab-45f9-a23b-d18372f1c28a","Type":"ContainerStarted","Data":"8f0a3adcdc9e144c57c23c75ccdc19f260443e945f56113884a3c2a8896a98cf"} Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.055171 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-h4p5q" event={"ID":"6bd24030-a535-4db1-b620-44d9c5c7a655","Type":"ContainerStarted","Data":"8644b10b13e1ad8b085bf617134ce88bfcf57f180286eef35c8bb7af5fe08c7b"} Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.055764 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:02 crc kubenswrapper[4965]: E0219 09:45:02.057133 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:02.557111009 +0000 UTC m=+158.178432319 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.120575 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nq2jk" event={"ID":"8c7730fc-ade8-4092-8923-54264965e892","Type":"ContainerStarted","Data":"36a03a498c6adf6429b75ed27060e5786e6dc2bfcd4739b3ac20e9139abb6327"} Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.146052 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-t4rcw" event={"ID":"cdd71340-9555-4441-b38c-89d5d4cc306b","Type":"ContainerStarted","Data":"ff236984025c8aba187d0f7675bd9ea6a7e1e81ca7588f1e8cc99195b71acf4d"} Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.148988 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29524890-mgzh5_7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe/collect-profiles/0.log" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.149058 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.157031 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6hln" event={"ID":"082ad3b5-c0c5-437c-8077-395c6ec09ec3","Type":"ContainerStarted","Data":"09f33199f98853bf9b325ff775dda8593c85e9cbad9c1668676cd6c8f9e170c2"} Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.158392 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:02 crc kubenswrapper[4965]: E0219 09:45:02.158754 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:02.658739645 +0000 UTC m=+158.280060955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.178456 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vfkkc" podStartSLOduration=135.178437105 podStartE2EDuration="2m15.178437105s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:02.070247409 +0000 UTC m=+157.691568719" watchObservedRunningTime="2026-02-19 09:45:02.178437105 +0000 UTC m=+157.799758415" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.185437 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7dd6s" event={"ID":"52aa8dbd-b4ec-4579-b036-a1dcf35567a5","Type":"ContainerStarted","Data":"fba7e81722b96804f518e20feb1a725cf4e659ee933334368e84c047ffb3aded"} Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.185488 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7dd6s" event={"ID":"52aa8dbd-b4ec-4579-b036-a1dcf35567a5","Type":"ContainerStarted","Data":"12d4c58c0052830b8414cbd24f94a29d6d008f27f890be39f62c410810ae7cbb"} Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.216554 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kvwsm" 
event={"ID":"6c8a56a8-ed46-4e4a-9dbd-de3914ee3581","Type":"ContainerStarted","Data":"b30f08c2aef34d0a94e8867a99ccf33ec363f7b1efb7615a777d7f1aa3f3bd37"} Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.219719 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6hln" podStartSLOduration=135.219687831 podStartE2EDuration="2m15.219687831s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:02.207475783 +0000 UTC m=+157.828797103" watchObservedRunningTime="2026-02-19 09:45:02.219687831 +0000 UTC m=+157.841009151" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.235456 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8ngq" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.237683 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" event={"ID":"190603fe-6420-4d17-91f5-c37c9038002c","Type":"ContainerStarted","Data":"02ea8d50383e909b37ce15d4ac29849cdb889c7c02eeff8f7bf8a9a27ccd5675"} Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.248279 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qnhlh" event={"ID":"cd586f77-cb07-42a9-b20c-bf06ed856469","Type":"ContainerStarted","Data":"82276cacab9a88c5c3da7233538421d612916c0178b244cdbf09690e64796686"} Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.260147 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbvbj\" (UniqueName: \"kubernetes.io/projected/7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe-kube-api-access-nbvbj\") pod \"7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe\" (UID: 
\"7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe\") " Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.260222 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe-secret-volume\") pod \"7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe\" (UID: \"7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe\") " Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.260416 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.260557 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe-config-volume\") pod \"7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe\" (UID: \"7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe\") " Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.260541 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-kvwsm" podStartSLOduration=134.260519116 podStartE2EDuration="2m14.260519116s" podCreationTimestamp="2026-02-19 09:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:02.259584303 +0000 UTC m=+157.880905613" watchObservedRunningTime="2026-02-19 09:45:02.260519116 +0000 UTC m=+157.881840426" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.287386 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2v2tm" Feb 19 09:45:02 crc kubenswrapper[4965]: E0219 09:45:02.295157 4965 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:02.795135429 +0000 UTC m=+158.416456739 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.295167 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6f8hc" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.297865 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdd5s" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.299541 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe" (UID: "7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.301182 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe-config-volume" (OuterVolumeSpecName: "config-volume") pod "7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe" (UID: "7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.316222 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe-kube-api-access-nbvbj" (OuterVolumeSpecName: "kube-api-access-nbvbj") pod "7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe" (UID: "7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe"). InnerVolumeSpecName "kube-api-access-nbvbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.386042 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.386161 4965 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.386179 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbvbj\" (UniqueName: \"kubernetes.io/projected/7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe-kube-api-access-nbvbj\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.386215 4965 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:02 crc kubenswrapper[4965]: E0219 09:45:02.394010 4965 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:02.893992598 +0000 UTC m=+158.515313908 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.430931 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fxnw5"] Feb 19 09:45:02 crc kubenswrapper[4965]: E0219 09:45:02.431293 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe" containerName="collect-profiles" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.431322 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe" containerName="collect-profiles" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.431470 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe" containerName="collect-profiles" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.432263 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fxnw5" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.445664 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.451046 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fxnw5"] Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.487717 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.487881 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ea1b40-1bc8-462a-a2a2-218c24c27584-catalog-content\") pod \"certified-operators-fxnw5\" (UID: \"c2ea1b40-1bc8-462a-a2a2-218c24c27584\") " pod="openshift-marketplace/certified-operators-fxnw5" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.487912 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbdk8\" (UniqueName: \"kubernetes.io/projected/c2ea1b40-1bc8-462a-a2a2-218c24c27584-kube-api-access-hbdk8\") pod \"certified-operators-fxnw5\" (UID: \"c2ea1b40-1bc8-462a-a2a2-218c24c27584\") " pod="openshift-marketplace/certified-operators-fxnw5" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.487968 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2ea1b40-1bc8-462a-a2a2-218c24c27584-utilities\") pod \"certified-operators-fxnw5\" (UID: 
\"c2ea1b40-1bc8-462a-a2a2-218c24c27584\") " pod="openshift-marketplace/certified-operators-fxnw5" Feb 19 09:45:02 crc kubenswrapper[4965]: E0219 09:45:02.488104 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:02.988087451 +0000 UTC m=+158.609408761 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.591601 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbdk8\" (UniqueName: \"kubernetes.io/projected/c2ea1b40-1bc8-462a-a2a2-218c24c27584-kube-api-access-hbdk8\") pod \"certified-operators-fxnw5\" (UID: \"c2ea1b40-1bc8-462a-a2a2-218c24c27584\") " pod="openshift-marketplace/certified-operators-fxnw5" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.592115 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.592177 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c2ea1b40-1bc8-462a-a2a2-218c24c27584-utilities\") pod \"certified-operators-fxnw5\" (UID: \"c2ea1b40-1bc8-462a-a2a2-218c24c27584\") " pod="openshift-marketplace/certified-operators-fxnw5" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.592276 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ea1b40-1bc8-462a-a2a2-218c24c27584-catalog-content\") pod \"certified-operators-fxnw5\" (UID: \"c2ea1b40-1bc8-462a-a2a2-218c24c27584\") " pod="openshift-marketplace/certified-operators-fxnw5" Feb 19 09:45:02 crc kubenswrapper[4965]: E0219 09:45:02.592859 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:03.092836894 +0000 UTC m=+158.714158384 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.592873 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ea1b40-1bc8-462a-a2a2-218c24c27584-catalog-content\") pod \"certified-operators-fxnw5\" (UID: \"c2ea1b40-1bc8-462a-a2a2-218c24c27584\") " pod="openshift-marketplace/certified-operators-fxnw5" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.592997 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c2ea1b40-1bc8-462a-a2a2-218c24c27584-utilities\") pod \"certified-operators-fxnw5\" (UID: \"c2ea1b40-1bc8-462a-a2a2-218c24c27584\") " pod="openshift-marketplace/certified-operators-fxnw5" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.602579 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tlmst"] Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.603582 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tlmst" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.616954 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.637143 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbdk8\" (UniqueName: \"kubernetes.io/projected/c2ea1b40-1bc8-462a-a2a2-218c24c27584-kube-api-access-hbdk8\") pod \"certified-operators-fxnw5\" (UID: \"c2ea1b40-1bc8-462a-a2a2-218c24c27584\") " pod="openshift-marketplace/certified-operators-fxnw5" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.638164 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tlmst"] Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.661159 4965 patch_prober.go:28] interesting pod/router-default-5444994796-56x8k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 09:45:02 crc kubenswrapper[4965]: [-]has-synced failed: reason withheld Feb 19 09:45:02 crc kubenswrapper[4965]: [+]process-running ok Feb 19 09:45:02 crc kubenswrapper[4965]: healthz check failed Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.661239 4965 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-56x8k" podUID="f9b832c2-b2a0-4017-a323-c317ec4c1c1c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.694910 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.695181 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/badd7c24-44c3-4853-9611-aeb49c3df0ab-utilities\") pod \"community-operators-tlmst\" (UID: \"badd7c24-44c3-4853-9611-aeb49c3df0ab\") " pod="openshift-marketplace/community-operators-tlmst" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.695291 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/badd7c24-44c3-4853-9611-aeb49c3df0ab-catalog-content\") pod \"community-operators-tlmst\" (UID: \"badd7c24-44c3-4853-9611-aeb49c3df0ab\") " pod="openshift-marketplace/community-operators-tlmst" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.695374 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db2vx\" (UniqueName: \"kubernetes.io/projected/badd7c24-44c3-4853-9611-aeb49c3df0ab-kube-api-access-db2vx\") pod \"community-operators-tlmst\" (UID: \"badd7c24-44c3-4853-9611-aeb49c3df0ab\") " pod="openshift-marketplace/community-operators-tlmst" Feb 19 09:45:02 crc kubenswrapper[4965]: E0219 09:45:02.695538 4965 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:03.195512596 +0000 UTC m=+158.816833906 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.797804 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/badd7c24-44c3-4853-9611-aeb49c3df0ab-utilities\") pod \"community-operators-tlmst\" (UID: \"badd7c24-44c3-4853-9611-aeb49c3df0ab\") " pod="openshift-marketplace/community-operators-tlmst" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.798250 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/badd7c24-44c3-4853-9611-aeb49c3df0ab-catalog-content\") pod \"community-operators-tlmst\" (UID: \"badd7c24-44c3-4853-9611-aeb49c3df0ab\") " pod="openshift-marketplace/community-operators-tlmst" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.798324 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:02 crc 
kubenswrapper[4965]: I0219 09:45:02.798403 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db2vx\" (UniqueName: \"kubernetes.io/projected/badd7c24-44c3-4853-9611-aeb49c3df0ab-kube-api-access-db2vx\") pod \"community-operators-tlmst\" (UID: \"badd7c24-44c3-4853-9611-aeb49c3df0ab\") " pod="openshift-marketplace/community-operators-tlmst" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.807871 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/badd7c24-44c3-4853-9611-aeb49c3df0ab-catalog-content\") pod \"community-operators-tlmst\" (UID: \"badd7c24-44c3-4853-9611-aeb49c3df0ab\") " pod="openshift-marketplace/community-operators-tlmst" Feb 19 09:45:02 crc kubenswrapper[4965]: E0219 09:45:02.808332 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:03.308312175 +0000 UTC m=+158.929633485 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.812264 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-shcvk"] Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.813710 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-shcvk" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.812287 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/badd7c24-44c3-4853-9611-aeb49c3df0ab-utilities\") pod \"community-operators-tlmst\" (UID: \"badd7c24-44c3-4853-9611-aeb49c3df0ab\") " pod="openshift-marketplace/community-operators-tlmst" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.836108 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-shcvk"] Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.856383 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db2vx\" (UniqueName: \"kubernetes.io/projected/badd7c24-44c3-4853-9611-aeb49c3df0ab-kube-api-access-db2vx\") pod \"community-operators-tlmst\" (UID: \"badd7c24-44c3-4853-9611-aeb49c3df0ab\") " pod="openshift-marketplace/community-operators-tlmst" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.875632 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fxnw5" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.903910 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.904328 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f1b66c-456a-415c-b093-20ab1fa33b9b-utilities\") pod \"certified-operators-shcvk\" (UID: \"98f1b66c-456a-415c-b093-20ab1fa33b9b\") " pod="openshift-marketplace/certified-operators-shcvk" Feb 19 09:45:02 crc kubenswrapper[4965]: E0219 09:45:02.904464 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:03.404424538 +0000 UTC m=+159.025746298 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.904656 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f1b66c-456a-415c-b093-20ab1fa33b9b-catalog-content\") pod \"certified-operators-shcvk\" (UID: \"98f1b66c-456a-415c-b093-20ab1fa33b9b\") " pod="openshift-marketplace/certified-operators-shcvk" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.904728 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nch5g\" (UniqueName: \"kubernetes.io/projected/98f1b66c-456a-415c-b093-20ab1fa33b9b-kube-api-access-nch5g\") pod \"certified-operators-shcvk\" (UID: \"98f1b66c-456a-415c-b093-20ab1fa33b9b\") " pod="openshift-marketplace/certified-operators-shcvk" Feb 19 09:45:02 crc kubenswrapper[4965]: I0219 09:45:02.994442 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tlmst" Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.014567 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-std27"] Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.015433 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f1b66c-456a-415c-b093-20ab1fa33b9b-utilities\") pod \"certified-operators-shcvk\" (UID: \"98f1b66c-456a-415c-b093-20ab1fa33b9b\") " pod="openshift-marketplace/certified-operators-shcvk" Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.015511 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.015561 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f1b66c-456a-415c-b093-20ab1fa33b9b-catalog-content\") pod \"certified-operators-shcvk\" (UID: \"98f1b66c-456a-415c-b093-20ab1fa33b9b\") " pod="openshift-marketplace/certified-operators-shcvk" Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.015598 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nch5g\" (UniqueName: \"kubernetes.io/projected/98f1b66c-456a-415c-b093-20ab1fa33b9b-kube-api-access-nch5g\") pod \"certified-operators-shcvk\" (UID: \"98f1b66c-456a-415c-b093-20ab1fa33b9b\") " pod="openshift-marketplace/certified-operators-shcvk" Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.015648 4965 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-std27" Feb 19 09:45:03 crc kubenswrapper[4965]: E0219 09:45:03.016456 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:03.516443457 +0000 UTC m=+159.137764767 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.016689 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f1b66c-456a-415c-b093-20ab1fa33b9b-utilities\") pod \"certified-operators-shcvk\" (UID: \"98f1b66c-456a-415c-b093-20ab1fa33b9b\") " pod="openshift-marketplace/certified-operators-shcvk" Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.016883 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f1b66c-456a-415c-b093-20ab1fa33b9b-catalog-content\") pod \"certified-operators-shcvk\" (UID: \"98f1b66c-456a-415c-b093-20ab1fa33b9b\") " pod="openshift-marketplace/certified-operators-shcvk" Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.049793 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nch5g\" (UniqueName: \"kubernetes.io/projected/98f1b66c-456a-415c-b093-20ab1fa33b9b-kube-api-access-nch5g\") pod \"certified-operators-shcvk\" (UID: 
\"98f1b66c-456a-415c-b093-20ab1fa33b9b\") " pod="openshift-marketplace/certified-operators-shcvk" Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.053299 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-std27"] Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.120046 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.120288 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20180c3a-aa7a-4263-9057-c85c636bfc48-catalog-content\") pod \"community-operators-std27\" (UID: \"20180c3a-aa7a-4263-9057-c85c636bfc48\") " pod="openshift-marketplace/community-operators-std27" Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.120362 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20180c3a-aa7a-4263-9057-c85c636bfc48-utilities\") pod \"community-operators-std27\" (UID: \"20180c3a-aa7a-4263-9057-c85c636bfc48\") " pod="openshift-marketplace/community-operators-std27" Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.120388 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42dwh\" (UniqueName: \"kubernetes.io/projected/20180c3a-aa7a-4263-9057-c85c636bfc48-kube-api-access-42dwh\") pod \"community-operators-std27\" (UID: \"20180c3a-aa7a-4263-9057-c85c636bfc48\") " pod="openshift-marketplace/community-operators-std27" Feb 19 09:45:03 crc kubenswrapper[4965]: E0219 09:45:03.120573 4965 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:03.620552614 +0000 UTC m=+159.241873924 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.208150 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-shcvk" Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.222069 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20180c3a-aa7a-4263-9057-c85c636bfc48-catalog-content\") pod \"community-operators-std27\" (UID: \"20180c3a-aa7a-4263-9057-c85c636bfc48\") " pod="openshift-marketplace/community-operators-std27" Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.222246 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20180c3a-aa7a-4263-9057-c85c636bfc48-utilities\") pod \"community-operators-std27\" (UID: \"20180c3a-aa7a-4263-9057-c85c636bfc48\") " pod="openshift-marketplace/community-operators-std27" Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.222369 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42dwh\" (UniqueName: 
\"kubernetes.io/projected/20180c3a-aa7a-4263-9057-c85c636bfc48-kube-api-access-42dwh\") pod \"community-operators-std27\" (UID: \"20180c3a-aa7a-4263-9057-c85c636bfc48\") " pod="openshift-marketplace/community-operators-std27" Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.222490 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:03 crc kubenswrapper[4965]: E0219 09:45:03.223018 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:03.722999701 +0000 UTC m=+159.344321011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.223891 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20180c3a-aa7a-4263-9057-c85c636bfc48-catalog-content\") pod \"community-operators-std27\" (UID: \"20180c3a-aa7a-4263-9057-c85c636bfc48\") " pod="openshift-marketplace/community-operators-std27" Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.224304 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20180c3a-aa7a-4263-9057-c85c636bfc48-utilities\") pod \"community-operators-std27\" (UID: \"20180c3a-aa7a-4263-9057-c85c636bfc48\") " pod="openshift-marketplace/community-operators-std27" Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.352404 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:03 crc kubenswrapper[4965]: E0219 09:45:03.353781 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 09:45:03.853760038 +0000 UTC m=+159.475081348 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.403553 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42dwh\" (UniqueName: \"kubernetes.io/projected/20180c3a-aa7a-4263-9057-c85c636bfc48-kube-api-access-42dwh\") pod \"community-operators-std27\" (UID: \"20180c3a-aa7a-4263-9057-c85c636bfc48\") " pod="openshift-marketplace/community-operators-std27" Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.457731 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lz7fh" event={"ID":"b4e92446-36cd-4840-97fc-d9f0d60e4e7d","Type":"ContainerStarted","Data":"42f1a3fa82a6348f06b7b1308b5f475a8bd821353cc0b03a1a2f2a3d81c201b6"} Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.466998 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29524890-mgzh5_7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe/collect-profiles/0.log" Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.467075 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5" event={"ID":"7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe","Type":"ContainerDied","Data":"a25b65e95de239ab8087cd5ea714899614f129b5af9489dd4a1c817a65a7cf60"} Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.467115 4965 
scope.go:117] "RemoveContainer" containerID="19ec5e8d7adb6947d3876be8a9af44f570f0c7923e4793ad1ee4e674c3591716" Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.467289 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5" Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.473842 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:03 crc kubenswrapper[4965]: E0219 09:45:03.474245 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:03.974231513 +0000 UTC m=+159.595552823 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.489139 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-682z8" event={"ID":"79e5acf4-3803-4356-aa12-622cceae90a5","Type":"ContainerStarted","Data":"c29553c314bda6ac7c978a596a9a6573778e5f62e54d551ff98c289ee574ae21"}
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.489216 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-682z8" event={"ID":"79e5acf4-3803-4356-aa12-622cceae90a5","Type":"ContainerStarted","Data":"f4abfa615cf4eebe69c97ee930decc901a4e747bbfabad8138853553ed713433"}
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.519740 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-h4p5q" event={"ID":"6bd24030-a535-4db1-b620-44d9c5c7a655","Type":"ContainerStarted","Data":"564afe9d8ba4a45dc0bc9f78c0613810b39c381e95d83adcc8a7f446f97676c8"}
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.522692 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" event={"ID":"190603fe-6420-4d17-91f5-c37c9038002c","Type":"ContainerStarted","Data":"7178023f0380d6235cb79ddd0cd42412ad0ab55cdb1fb062a5ef0b2216b15cf0"}
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.523499 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj"
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.525616 4965 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vkrsj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.525655 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" podUID="190603fe-6420-4d17-91f5-c37c9038002c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused"
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.559276 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lz7fh" podStartSLOduration=136.559257785 podStartE2EDuration="2m16.559257785s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:03.509075773 +0000 UTC m=+159.130397083" watchObservedRunningTime="2026-02-19 09:45:03.559257785 +0000 UTC m=+159.180579095"
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.560051 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qnhlh" event={"ID":"cd586f77-cb07-42a9-b20c-bf06ed856469","Type":"ContainerStarted","Data":"ff6bdade4c7fb6a6934945a3d70d976aa47db56a9dd6237f00e09ea3f9dfc0f5"}
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.560147 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qnhlh" event={"ID":"cd586f77-cb07-42a9-b20c-bf06ed856469","Type":"ContainerStarted","Data":"3c2940e0a31f9e5ca2f424de716b66dd83887f76e87c7ca4e89f7ef8879ef6f4"}
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.565433 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5"]
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.566149 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524890-mgzh5"]
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.577530 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 09:45:03 crc kubenswrapper[4965]: E0219 09:45:03.578711 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:04.078689498 +0000 UTC m=+159.700010808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.599227 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7dd6s" event={"ID":"52aa8dbd-b4ec-4579-b036-a1dcf35567a5","Type":"ContainerStarted","Data":"988d6139f10a429160f754bfa01bb83265fc241eeb60ac53b2895a2818354a25"}
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.646481 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-std27"
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.657619 4965 patch_prober.go:28] interesting pod/router-default-5444994796-56x8k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 19 09:45:03 crc kubenswrapper[4965]: [-]has-synced failed: reason withheld
Feb 19 09:45:03 crc kubenswrapper[4965]: [+]process-running ok
Feb 19 09:45:03 crc kubenswrapper[4965]: healthz check failed
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.657690 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-56x8k" podUID="f9b832c2-b2a0-4017-a323-c317ec4c1c1c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.658582 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-682z8" podStartSLOduration=3.658562085 podStartE2EDuration="3.658562085s" podCreationTimestamp="2026-02-19 09:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:03.628335809 +0000 UTC m=+159.249657129" watchObservedRunningTime="2026-02-19 09:45:03.658562085 +0000 UTC m=+159.279883395"
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.670677 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vkxtv" event={"ID":"d6fcc552-ae72-46a3-9525-cfb460da05e1","Type":"ContainerStarted","Data":"acca881ff5a42756f6abf281944e1a0deadbdb3875ef8a9ee41f3c48229fdf0a"}
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.683268 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4"
Feb 19 09:45:03 crc kubenswrapper[4965]: E0219 09:45:03.684666 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:04.184646421 +0000 UTC m=+159.805967731 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.695876 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-h4p5q" podStartSLOduration=136.695849504 podStartE2EDuration="2m16.695849504s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:03.65836657 +0000 UTC m=+159.279687890" watchObservedRunningTime="2026-02-19 09:45:03.695849504 +0000 UTC m=+159.317170824"
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.696359 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qnhlh" podStartSLOduration=136.696354196 podStartE2EDuration="2m16.696354196s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:03.682551329 +0000 UTC m=+159.303872639" watchObservedRunningTime="2026-02-19 09:45:03.696354196 +0000 UTC m=+159.317675506"
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.705615 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zjvhq" event={"ID":"bd91bca5-eb6e-4fcf-b8c8-013e057a95d0","Type":"ContainerStarted","Data":"84cfb05ebbcd0e96405b8a504728e8f2175739d3519917d24e7bc5ee2ec03bcd"}
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.738001 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7dd6s" podStartSLOduration=136.73797976 podStartE2EDuration="2m16.73797976s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:03.737840347 +0000 UTC m=+159.359161657" watchObservedRunningTime="2026-02-19 09:45:03.73797976 +0000 UTC m=+159.359301070"
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.770599 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bjhlw" event={"ID":"164f68fa-9132-47c9-9c23-bca749b3f4e8","Type":"ContainerStarted","Data":"d259caa1c1d3259bdf8dead71c045be7463b3174c4141dd7dd20fd92b6da8162"}
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.781664 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" podStartSLOduration=135.781642004 podStartE2EDuration="2m15.781642004s" podCreationTimestamp="2026-02-19 09:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:03.78066203 +0000 UTC m=+159.401983350" watchObservedRunningTime="2026-02-19 09:45:03.781642004 +0000 UTC m=+159.402963314"
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.791517 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 09:45:03 crc kubenswrapper[4965]: E0219 09:45:03.793074 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:04.293045302 +0000 UTC m=+159.914366612 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.804761 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kvwsm" event={"ID":"6c8a56a8-ed46-4e4a-9dbd-de3914ee3581","Type":"ContainerStarted","Data":"1b427e5cc37eb85bfdcf3a7001a190c16c4db3abd74be27e36eea8b220fd69bd"}
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.841319 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-t4rcw" event={"ID":"cdd71340-9555-4441-b38c-89d5d4cc306b","Type":"ContainerStarted","Data":"788de7c96d2d39342016510fec31ced9bd192fa6fc2f673ed9efac075c655902"}
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.855308 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-gmzqb" event={"ID":"32c44420-8d84-4c62-afc4-dad00a930b62","Type":"ContainerStarted","Data":"6233e94fc395e329d643523d647d45c6a0f3bf88592819b58eace7ee10fc1db1"}
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.855705 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-gmzqb" event={"ID":"32c44420-8d84-4c62-afc4-dad00a930b62","Type":"ContainerStarted","Data":"9e7c7b5a57293571612f925e7bb2365204717232781bc8fd39605a1bc4e3696a"}
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.870915 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zjvhq" podStartSLOduration=136.870891929 podStartE2EDuration="2m16.870891929s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:03.869974047 +0000 UTC m=+159.491295357" watchObservedRunningTime="2026-02-19 09:45:03.870891929 +0000 UTC m=+159.492213249"
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.880940 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fxnw5"]
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.882060 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hqt8l" event={"ID":"49cf856e-b37d-4ab6-9c6e-241cbc4be93e","Type":"ContainerStarted","Data":"7ee6818d3e5f0f0da203c9c0fd6d63352ca4ffb4ce7ea8c8a885cb1e5fa9eb3d"}
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.894610 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4"
Feb 19 09:45:03 crc kubenswrapper[4965]: E0219 09:45:03.895784 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:04.395765016 +0000 UTC m=+160.017086316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.921291 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-47ksl" event={"ID":"88b545dd-0faa-4093-8fd9-40b693b3ef87","Type":"ContainerStarted","Data":"540b882c526bc6754984c2f74f5eae160a7a588df62f0dcc9dcb9a88f3e73290"}
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.966176 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bjhlw" podStartSLOduration=136.966158421 podStartE2EDuration="2m16.966158421s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:03.957360767 +0000 UTC m=+159.578682087" watchObservedRunningTime="2026-02-19 09:45:03.966158421 +0000 UTC m=+159.587479731"
Feb 19 09:45:03 crc kubenswrapper[4965]: I0219 09:45:03.980203 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-twxbq" event={"ID":"6fd7c237-27ab-45f9-a23b-d18372f1c28a","Type":"ContainerStarted","Data":"33093bdd4ed4719bc70a84ec8e243e98b447c21d357bf483a0d551a3e716ec29"}
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.001316 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 09:45:04 crc kubenswrapper[4965]: E0219 09:45:04.002666 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:04.50264427 +0000 UTC m=+160.123965580 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.003053 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2v6fb" event={"ID":"5e0fcf66-e50c-4c4c-9370-08ed336d25d9","Type":"ContainerStarted","Data":"45ef7418011efa727aee6c644bbb6c80e0f02f7120387fee2ecfd2a3e84d3f3d"}
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.004084 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-vkxtv" podStartSLOduration=137.004057034 podStartE2EDuration="2m17.004057034s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:04.002760523 +0000 UTC m=+159.624081853" watchObservedRunningTime="2026-02-19 09:45:04.004057034 +0000 UTC m=+159.625378354"
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.041893 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nq2jk" event={"ID":"8c7730fc-ade8-4092-8923-54264965e892","Type":"ContainerStarted","Data":"fe3b6077bea4fbe7d20cf5e35da95a2fdac7163442bcc15dbcbd131889aafa62"}
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.042143 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nq2jk" event={"ID":"8c7730fc-ade8-4092-8923-54264965e892","Type":"ContainerStarted","Data":"32a3f94bf7f13a18ce2373ff35ed3093c33e6b972b276f39f1f9b3a7ae109e43"}
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.079536 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kpdjc" event={"ID":"8772ecf1-3bce-4573-91de-daf37c1ef762","Type":"ContainerStarted","Data":"e0bf30b7ea0e4daacdfc6e95fa242435027366c0f8542ac40e08ca4c56ad7309"}
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.079611 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kpdjc" event={"ID":"8772ecf1-3bce-4573-91de-daf37c1ef762","Type":"ContainerStarted","Data":"b426d098b9e200c47d0e5441582dafa5e3115cd8cd17105f5b0089f23f698427"}
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.080854 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kpdjc"
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.093682 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-47ksl" podStartSLOduration=136.093655978 podStartE2EDuration="2m16.093655978s" podCreationTimestamp="2026-02-19 09:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:04.03960173 +0000 UTC m=+159.660923050" watchObservedRunningTime="2026-02-19 09:45:04.093655978 +0000 UTC m=+159.714977288"
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.095336 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-hqt8l" podStartSLOduration=137.095328308 podStartE2EDuration="2m17.095328308s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:04.092616852 +0000 UTC m=+159.713938172" watchObservedRunningTime="2026-02-19 09:45:04.095328308 +0000 UTC m=+159.716649618"
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.103335 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4"
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.103360 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f6hln" event={"ID":"082ad3b5-c0c5-437c-8077-395c6ec09ec3","Type":"ContainerStarted","Data":"29ded0b7271a48db43ff5aac09b1a51512db7d7954efc2fb14b48f329615081c"}
Feb 19 09:45:04 crc kubenswrapper[4965]: E0219 09:45:04.104662 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:04.604639526 +0000 UTC m=+160.225961026 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.114563 4965 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-kpdjc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body=
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.114627 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kpdjc" podUID="8772ecf1-3bce-4573-91de-daf37c1ef762" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused"
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.140031 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-gmzqb" podStartSLOduration=9.140008177 podStartE2EDuration="9.140008177s" podCreationTimestamp="2026-02-19 09:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:04.137846985 +0000 UTC m=+159.759168295" watchObservedRunningTime="2026-02-19 09:45:04.140008177 +0000 UTC m=+159.761329487"
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.171519 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2v6fb" podStartSLOduration=137.171496995 podStartE2EDuration="2m17.171496995s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:04.160418145 +0000 UTC m=+159.781739465" watchObservedRunningTime="2026-02-19 09:45:04.171496995 +0000 UTC m=+159.792818305"
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.214765 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 09:45:04 crc kubenswrapper[4965]: E0219 09:45:04.216459 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:04.71643925 +0000 UTC m=+160.337760560 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.224950 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nq2jk" podStartSLOduration=137.224921677 podStartE2EDuration="2m17.224921677s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:04.213939759 +0000 UTC m=+159.835261069" watchObservedRunningTime="2026-02-19 09:45:04.224921677 +0000 UTC m=+159.846242987"
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.293859 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kpdjc" podStartSLOduration=136.293839416 podStartE2EDuration="2m16.293839416s" podCreationTimestamp="2026-02-19 09:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:04.293536629 +0000 UTC m=+159.914857939" watchObservedRunningTime="2026-02-19 09:45:04.293839416 +0000 UTC m=+159.915160726"
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.316952 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4"
Feb 19 09:45:04 crc kubenswrapper[4965]: E0219 09:45:04.317382 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:04.81736937 +0000 UTC m=+160.438690680 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.418579 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 09:45:04 crc kubenswrapper[4965]: E0219 09:45:04.418816 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:04.91876921 +0000 UTC m=+160.540090520 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.418989 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4"
Feb 19 09:45:04 crc kubenswrapper[4965]: E0219 09:45:04.419586 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:04.91957817 +0000 UTC m=+160.540899480 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.422823 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c55hf"]
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.424501 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c55hf"
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.435513 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c55hf"]
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.435526 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.493405 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tlmst"]
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.515054 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-shcvk"]
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.524704 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.524994 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1832525-d3f5-47bc-879b-4d4e4f3c14bd-catalog-content\") pod \"redhat-marketplace-c55hf\" (UID: \"b1832525-d3f5-47bc-879b-4d4e4f3c14bd\") " pod="openshift-marketplace/redhat-marketplace-c55hf"
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.525029 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pbqz\" (UniqueName: \"kubernetes.io/projected/b1832525-d3f5-47bc-879b-4d4e4f3c14bd-kube-api-access-4pbqz\") pod \"redhat-marketplace-c55hf\" (UID: \"b1832525-d3f5-47bc-879b-4d4e4f3c14bd\") " pod="openshift-marketplace/redhat-marketplace-c55hf"
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.525068 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1832525-d3f5-47bc-879b-4d4e4f3c14bd-utilities\") pod \"redhat-marketplace-c55hf\" (UID: \"b1832525-d3f5-47bc-879b-4d4e4f3c14bd\") " pod="openshift-marketplace/redhat-marketplace-c55hf"
Feb 19 09:45:04 crc kubenswrapper[4965]: E0219 09:45:04.525207 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:05.025174493 +0000 UTC m=+160.646495803 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.627168 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1832525-d3f5-47bc-879b-4d4e4f3c14bd-catalog-content\") pod \"redhat-marketplace-c55hf\" (UID: \"b1832525-d3f5-47bc-879b-4d4e4f3c14bd\") " pod="openshift-marketplace/redhat-marketplace-c55hf"
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.627253 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pbqz\" (UniqueName: \"kubernetes.io/projected/b1832525-d3f5-47bc-879b-4d4e4f3c14bd-kube-api-access-4pbqz\") pod \"redhat-marketplace-c55hf\" (UID: \"b1832525-d3f5-47bc-879b-4d4e4f3c14bd\") " pod="openshift-marketplace/redhat-marketplace-c55hf"
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.627289 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4"
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.627331 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1832525-d3f5-47bc-879b-4d4e4f3c14bd-utilities\") pod \"redhat-marketplace-c55hf\" (UID: \"b1832525-d3f5-47bc-879b-4d4e4f3c14bd\") " pod="openshift-marketplace/redhat-marketplace-c55hf"
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.627845 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1832525-d3f5-47bc-879b-4d4e4f3c14bd-utilities\") pod \"redhat-marketplace-c55hf\" (UID: \"b1832525-d3f5-47bc-879b-4d4e4f3c14bd\") " pod="openshift-marketplace/redhat-marketplace-c55hf"
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.628148 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1832525-d3f5-47bc-879b-4d4e4f3c14bd-catalog-content\") pod \"redhat-marketplace-c55hf\" (UID: \"b1832525-d3f5-47bc-879b-4d4e4f3c14bd\") " pod="openshift-marketplace/redhat-marketplace-c55hf"
Feb 19 09:45:04 crc kubenswrapper[4965]: E0219 09:45:04.628887 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:05.12887196 +0000 UTC m=+160.750193270 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.655076 4965 patch_prober.go:28] interesting pod/router-default-5444994796-56x8k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 19 09:45:04 crc kubenswrapper[4965]: [-]has-synced failed: reason withheld
Feb 19 09:45:04 crc kubenswrapper[4965]: [+]process-running ok
Feb 19 09:45:04 crc kubenswrapper[4965]: healthz check failed
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.655144 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-56x8k" podUID="f9b832c2-b2a0-4017-a323-c317ec4c1c1c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.673432 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-std27"]
Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.674495 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pbqz\" (UniqueName: \"kubernetes.io/projected/b1832525-d3f5-47bc-879b-4d4e4f3c14bd-kube-api-access-4pbqz\") pod \"redhat-marketplace-c55hf\" (UID: \"b1832525-d3f5-47bc-879b-4d4e4f3c14bd\") " pod="openshift-marketplace/redhat-marketplace-c55hf"
Feb 19 09:45:04 crc 
kubenswrapper[4965]: I0219 09:45:04.728663 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:04 crc kubenswrapper[4965]: E0219 09:45:04.729179 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:05.229163634 +0000 UTC m=+160.850484944 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.761018 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c55hf" Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.834382 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:04 crc kubenswrapper[4965]: E0219 09:45:04.834806 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:05.334783648 +0000 UTC m=+160.956104958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.847718 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5667c"] Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.848896 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5667c" Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.869216 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5667c"] Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.935558 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:04 crc kubenswrapper[4965]: E0219 09:45:04.935679 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:05.435661926 +0000 UTC m=+161.056983236 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.935864 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gwp5\" (UniqueName: \"kubernetes.io/projected/5e1e7158-ad23-4414-9858-0c1056a71f56-kube-api-access-6gwp5\") pod \"redhat-marketplace-5667c\" (UID: \"5e1e7158-ad23-4414-9858-0c1056a71f56\") " pod="openshift-marketplace/redhat-marketplace-5667c" Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.935896 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1e7158-ad23-4414-9858-0c1056a71f56-catalog-content\") pod \"redhat-marketplace-5667c\" (UID: \"5e1e7158-ad23-4414-9858-0c1056a71f56\") " pod="openshift-marketplace/redhat-marketplace-5667c" Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.935950 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1e7158-ad23-4414-9858-0c1056a71f56-utilities\") pod \"redhat-marketplace-5667c\" (UID: \"5e1e7158-ad23-4414-9858-0c1056a71f56\") " pod="openshift-marketplace/redhat-marketplace-5667c" Feb 19 09:45:04 crc kubenswrapper[4965]: I0219 09:45:04.935997 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:04 crc kubenswrapper[4965]: E0219 09:45:04.936319 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:05.436309992 +0000 UTC m=+161.057631302 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.038055 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.038293 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1e7158-ad23-4414-9858-0c1056a71f56-utilities\") pod \"redhat-marketplace-5667c\" (UID: \"5e1e7158-ad23-4414-9858-0c1056a71f56\") " pod="openshift-marketplace/redhat-marketplace-5667c" Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.038381 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gwp5\" (UniqueName: 
\"kubernetes.io/projected/5e1e7158-ad23-4414-9858-0c1056a71f56-kube-api-access-6gwp5\") pod \"redhat-marketplace-5667c\" (UID: \"5e1e7158-ad23-4414-9858-0c1056a71f56\") " pod="openshift-marketplace/redhat-marketplace-5667c" Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.038404 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1e7158-ad23-4414-9858-0c1056a71f56-catalog-content\") pod \"redhat-marketplace-5667c\" (UID: \"5e1e7158-ad23-4414-9858-0c1056a71f56\") " pod="openshift-marketplace/redhat-marketplace-5667c" Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.038850 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1e7158-ad23-4414-9858-0c1056a71f56-catalog-content\") pod \"redhat-marketplace-5667c\" (UID: \"5e1e7158-ad23-4414-9858-0c1056a71f56\") " pod="openshift-marketplace/redhat-marketplace-5667c" Feb 19 09:45:05 crc kubenswrapper[4965]: E0219 09:45:05.038938 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:05.538918022 +0000 UTC m=+161.160239332 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.039238 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1e7158-ad23-4414-9858-0c1056a71f56-utilities\") pod \"redhat-marketplace-5667c\" (UID: \"5e1e7158-ad23-4414-9858-0c1056a71f56\") " pod="openshift-marketplace/redhat-marketplace-5667c" Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.095080 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gwp5\" (UniqueName: \"kubernetes.io/projected/5e1e7158-ad23-4414-9858-0c1056a71f56-kube-api-access-6gwp5\") pod \"redhat-marketplace-5667c\" (UID: \"5e1e7158-ad23-4414-9858-0c1056a71f56\") " pod="openshift-marketplace/redhat-marketplace-5667c" Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.138144 4965 generic.go:334] "Generic (PLEG): container finished" podID="c2ea1b40-1bc8-462a-a2a2-218c24c27584" containerID="21e4ea1808ec358fa098edc41aacf9ea07b3de663c30f9619cac5cfbefe48704" exitCode=0 Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.138609 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxnw5" event={"ID":"c2ea1b40-1bc8-462a-a2a2-218c24c27584","Type":"ContainerDied","Data":"21e4ea1808ec358fa098edc41aacf9ea07b3de663c30f9619cac5cfbefe48704"} Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.138637 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxnw5" 
event={"ID":"c2ea1b40-1bc8-462a-a2a2-218c24c27584","Type":"ContainerStarted","Data":"356bc9fc4963daa169e51788c873906cb35206f2efdd76fff51e547267ca86d6"} Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.139354 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:05 crc kubenswrapper[4965]: E0219 09:45:05.139676 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:05.639662087 +0000 UTC m=+161.260983397 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.141829 4965 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.143542 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-t4rcw" event={"ID":"cdd71340-9555-4441-b38c-89d5d4cc306b","Type":"ContainerStarted","Data":"c2c2d688981500c5a8e309fa3b41ad99d17796f7406f2fe4c2715f6201215525"} Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.144165 4965 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-dns/dns-default-t4rcw" Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.146865 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-std27" event={"ID":"20180c3a-aa7a-4263-9057-c85c636bfc48","Type":"ContainerStarted","Data":"10de4366a806f9e7d262c223f984ee3cf7838ba2b1ce8d35e0aaffedecfcd699"} Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.146894 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-std27" event={"ID":"20180c3a-aa7a-4263-9057-c85c636bfc48","Type":"ContainerStarted","Data":"c887238d8ac72033f8640766d1d0c76c32255b85797c323f404e12da619a2224"} Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.149357 4965 generic.go:334] "Generic (PLEG): container finished" podID="badd7c24-44c3-4853-9611-aeb49c3df0ab" containerID="78deb2d1dc7c2770edaea43ae2d5b815f4042f8588f74e7705759234299dbe8d" exitCode=0 Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.149406 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlmst" event={"ID":"badd7c24-44c3-4853-9611-aeb49c3df0ab","Type":"ContainerDied","Data":"78deb2d1dc7c2770edaea43ae2d5b815f4042f8588f74e7705759234299dbe8d"} Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.149426 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlmst" event={"ID":"badd7c24-44c3-4853-9611-aeb49c3df0ab","Type":"ContainerStarted","Data":"a82bb1a06ca6e1f8e6b53476b4e05d1e214d47d5d42523bfa113a2ec08db7362"} Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.177120 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" event={"ID":"23c8c1d2-4e7b-4cd4-99cf-92130064bbbf","Type":"ContainerStarted","Data":"3c5f2e1cae11fb1e63e4b4680f2e5eea80a0d36f9e183f325703c35b76c6bb75"} Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.187624 
4965 generic.go:334] "Generic (PLEG): container finished" podID="98f1b66c-456a-415c-b093-20ab1fa33b9b" containerID="514f8c61e959ed459a09302a32819b44280cdb216c8943f448872c8e27fd5501" exitCode=0 Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.187700 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shcvk" event={"ID":"98f1b66c-456a-415c-b093-20ab1fa33b9b","Type":"ContainerDied","Data":"514f8c61e959ed459a09302a32819b44280cdb216c8943f448872c8e27fd5501"} Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.187734 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shcvk" event={"ID":"98f1b66c-456a-415c-b093-20ab1fa33b9b","Type":"ContainerStarted","Data":"0f41403e0e617b30b30c5f8e46855ac16bb25da788b2dbb246fc4687c7586506"} Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.189871 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5667c" Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.244880 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:05 crc kubenswrapper[4965]: E0219 09:45:05.246322 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:05.746306366 +0000 UTC m=+161.367627676 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.278417 4965 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vkrsj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.278500 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" podUID="190603fe-6420-4d17-91f5-c37c9038002c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.331644 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe" path="/var/lib/kubelet/pods/7cd3fe51-e7f2-4d42-b054-5e8dbbb7ebbe/volumes" Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.332327 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-twxbq" event={"ID":"6fd7c237-27ab-45f9-a23b-d18372f1c28a","Type":"ContainerStarted","Data":"f64d5dd6bfe2aca5e7215e30f704d0884ab4a169ebe03cfdbae5c86ced33c1f1"} Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.339069 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-t4rcw" podStartSLOduration=10.339047477 
podStartE2EDuration="10.339047477s" podCreationTimestamp="2026-02-19 09:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:05.337619471 +0000 UTC m=+160.958940781" watchObservedRunningTime="2026-02-19 09:45:05.339047477 +0000 UTC m=+160.960368787" Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.350181 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:05 crc kubenswrapper[4965]: E0219 09:45:05.359738 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:05.85971917 +0000 UTC m=+161.481040480 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.451876 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:05 crc kubenswrapper[4965]: E0219 09:45:05.452116 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:05.952084311 +0000 UTC m=+161.573405621 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.452724 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:05 crc kubenswrapper[4965]: E0219 09:45:05.453034 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:05.953022194 +0000 UTC m=+161.574343504 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.454444 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c55hf"] Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.460574 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-kpdjc" Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.537072 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-twxbq" podStartSLOduration=137.537055341 podStartE2EDuration="2m17.537055341s" podCreationTimestamp="2026-02-19 09:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:05.533636118 +0000 UTC m=+161.154957428" watchObservedRunningTime="2026-02-19 09:45:05.537055341 +0000 UTC m=+161.158376641" Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.553478 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:05 crc kubenswrapper[4965]: E0219 09:45:05.553949 4965 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:06.053927553 +0000 UTC m=+161.675248863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.655438 4965 patch_prober.go:28] interesting pod/router-default-5444994796-56x8k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 09:45:05 crc kubenswrapper[4965]: [-]has-synced failed: reason withheld Feb 19 09:45:05 crc kubenswrapper[4965]: [+]process-running ok Feb 19 09:45:05 crc kubenswrapper[4965]: healthz check failed Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.655502 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-56x8k" podUID="f9b832c2-b2a0-4017-a323-c317ec4c1c1c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.656957 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:05 crc kubenswrapper[4965]: E0219 09:45:05.657292 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:06.157281201 +0000 UTC m=+161.778602511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.673457 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s8pfp"] Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.675685 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s8pfp" Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.694403 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s8pfp"] Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.712165 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.759081 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.759465 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e428e472-401e-45b3-b70b-d2e0f19b52f9-catalog-content\") pod \"redhat-operators-s8pfp\" (UID: \"e428e472-401e-45b3-b70b-d2e0f19b52f9\") " pod="openshift-marketplace/redhat-operators-s8pfp" Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.759502 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e428e472-401e-45b3-b70b-d2e0f19b52f9-utilities\") pod \"redhat-operators-s8pfp\" (UID: \"e428e472-401e-45b3-b70b-d2e0f19b52f9\") " pod="openshift-marketplace/redhat-operators-s8pfp" Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.759535 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfljm\" (UniqueName: \"kubernetes.io/projected/e428e472-401e-45b3-b70b-d2e0f19b52f9-kube-api-access-lfljm\") pod \"redhat-operators-s8pfp\" (UID: 
\"e428e472-401e-45b3-b70b-d2e0f19b52f9\") " pod="openshift-marketplace/redhat-operators-s8pfp" Feb 19 09:45:05 crc kubenswrapper[4965]: E0219 09:45:05.759695 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:06.259678457 +0000 UTC m=+161.880999767 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.861660 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.861728 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e428e472-401e-45b3-b70b-d2e0f19b52f9-catalog-content\") pod \"redhat-operators-s8pfp\" (UID: \"e428e472-401e-45b3-b70b-d2e0f19b52f9\") " pod="openshift-marketplace/redhat-operators-s8pfp" Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.861750 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e428e472-401e-45b3-b70b-d2e0f19b52f9-utilities\") pod \"redhat-operators-s8pfp\" (UID: \"e428e472-401e-45b3-b70b-d2e0f19b52f9\") " pod="openshift-marketplace/redhat-operators-s8pfp" Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.861777 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfljm\" (UniqueName: \"kubernetes.io/projected/e428e472-401e-45b3-b70b-d2e0f19b52f9-kube-api-access-lfljm\") pod \"redhat-operators-s8pfp\" (UID: \"e428e472-401e-45b3-b70b-d2e0f19b52f9\") " pod="openshift-marketplace/redhat-operators-s8pfp" Feb 19 09:45:05 crc kubenswrapper[4965]: E0219 09:45:05.862522 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:06.362505002 +0000 UTC m=+161.983826312 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.862923 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e428e472-401e-45b3-b70b-d2e0f19b52f9-catalog-content\") pod \"redhat-operators-s8pfp\" (UID: \"e428e472-401e-45b3-b70b-d2e0f19b52f9\") " pod="openshift-marketplace/redhat-operators-s8pfp" Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.863181 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e428e472-401e-45b3-b70b-d2e0f19b52f9-utilities\") pod \"redhat-operators-s8pfp\" (UID: \"e428e472-401e-45b3-b70b-d2e0f19b52f9\") " pod="openshift-marketplace/redhat-operators-s8pfp" Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.893520 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfljm\" (UniqueName: \"kubernetes.io/projected/e428e472-401e-45b3-b70b-d2e0f19b52f9-kube-api-access-lfljm\") pod \"redhat-operators-s8pfp\" (UID: \"e428e472-401e-45b3-b70b-d2e0f19b52f9\") " pod="openshift-marketplace/redhat-operators-s8pfp" Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.905857 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5667c"] Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.962756 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:05 crc kubenswrapper[4965]: E0219 09:45:05.963179 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:06.463157815 +0000 UTC m=+162.084479125 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.963312 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:05 crc kubenswrapper[4965]: E0219 09:45:05.963709 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:06.463695798 +0000 UTC m=+162.085017098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:05 crc kubenswrapper[4965]: I0219 09:45:05.999017 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8tjpx"] Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.005836 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8tjpx" Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.034872 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8tjpx"] Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.055246 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8pfp" Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.064260 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.064801 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7296417d-9dfd-4ca9-8ad7-c0016daa9b53-utilities\") pod \"redhat-operators-8tjpx\" (UID: \"7296417d-9dfd-4ca9-8ad7-c0016daa9b53\") " pod="openshift-marketplace/redhat-operators-8tjpx" Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.064959 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7296417d-9dfd-4ca9-8ad7-c0016daa9b53-catalog-content\") pod \"redhat-operators-8tjpx\" (UID: \"7296417d-9dfd-4ca9-8ad7-c0016daa9b53\") " pod="openshift-marketplace/redhat-operators-8tjpx" Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.065072 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p7n4\" (UniqueName: \"kubernetes.io/projected/7296417d-9dfd-4ca9-8ad7-c0016daa9b53-kube-api-access-5p7n4\") pod \"redhat-operators-8tjpx\" (UID: 
\"7296417d-9dfd-4ca9-8ad7-c0016daa9b53\") " pod="openshift-marketplace/redhat-operators-8tjpx" Feb 19 09:45:06 crc kubenswrapper[4965]: E0219 09:45:06.065259 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:06.565243213 +0000 UTC m=+162.186564523 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.091802 4965 patch_prober.go:28] interesting pod/downloads-7954f5f757-8b2cq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.091876 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-8b2cq" podUID="efb57d4d-b3d4-42fa-a27b-299bdf135836" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.091944 4965 patch_prober.go:28] interesting pod/downloads-7954f5f757-8b2cq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 19 09:45:06 crc 
kubenswrapper[4965]: I0219 09:45:06.091991 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8b2cq" podUID="efb57d4d-b3d4-42fa-a27b-299bdf135836" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.149638 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.166588 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7296417d-9dfd-4ca9-8ad7-c0016daa9b53-catalog-content\") pod \"redhat-operators-8tjpx\" (UID: \"7296417d-9dfd-4ca9-8ad7-c0016daa9b53\") " pod="openshift-marketplace/redhat-operators-8tjpx" Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.166675 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p7n4\" (UniqueName: \"kubernetes.io/projected/7296417d-9dfd-4ca9-8ad7-c0016daa9b53-kube-api-access-5p7n4\") pod \"redhat-operators-8tjpx\" (UID: \"7296417d-9dfd-4ca9-8ad7-c0016daa9b53\") " pod="openshift-marketplace/redhat-operators-8tjpx" Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.166700 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7296417d-9dfd-4ca9-8ad7-c0016daa9b53-utilities\") pod \"redhat-operators-8tjpx\" (UID: \"7296417d-9dfd-4ca9-8ad7-c0016daa9b53\") " pod="openshift-marketplace/redhat-operators-8tjpx" Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.166728 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:06 crc kubenswrapper[4965]: E0219 09:45:06.167042 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:06.667029203 +0000 UTC m=+162.288350513 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.167497 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7296417d-9dfd-4ca9-8ad7-c0016daa9b53-catalog-content\") pod \"redhat-operators-8tjpx\" (UID: \"7296417d-9dfd-4ca9-8ad7-c0016daa9b53\") " pod="openshift-marketplace/redhat-operators-8tjpx" Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.167957 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7296417d-9dfd-4ca9-8ad7-c0016daa9b53-utilities\") pod \"redhat-operators-8tjpx\" (UID: \"7296417d-9dfd-4ca9-8ad7-c0016daa9b53\") " pod="openshift-marketplace/redhat-operators-8tjpx" Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.207957 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p7n4\" (UniqueName: 
\"kubernetes.io/projected/7296417d-9dfd-4ca9-8ad7-c0016daa9b53-kube-api-access-5p7n4\") pod \"redhat-operators-8tjpx\" (UID: \"7296417d-9dfd-4ca9-8ad7-c0016daa9b53\") " pod="openshift-marketplace/redhat-operators-8tjpx" Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.276888 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:06 crc kubenswrapper[4965]: E0219 09:45:06.279927 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:06.779867822 +0000 UTC m=+162.401189142 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.299499 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:06 crc kubenswrapper[4965]: E0219 09:45:06.301702 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:06.801671734 +0000 UTC m=+162.422993044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.333030 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8tjpx" Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.346426 4965 generic.go:334] "Generic (PLEG): container finished" podID="b1832525-d3f5-47bc-879b-4d4e4f3c14bd" containerID="c49f21393ac45879a0bc1cb39fb0dbc55824e5b4cc347116fff8a53e4f81cc0e" exitCode=0 Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.346568 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c55hf" event={"ID":"b1832525-d3f5-47bc-879b-4d4e4f3c14bd","Type":"ContainerDied","Data":"c49f21393ac45879a0bc1cb39fb0dbc55824e5b4cc347116fff8a53e4f81cc0e"} Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.346612 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c55hf" event={"ID":"b1832525-d3f5-47bc-879b-4d4e4f3c14bd","Type":"ContainerStarted","Data":"04b20bac64110b159ef127b8cec2c6414fd3727d547eb620289aa6de722364d4"} Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.357809 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5667c" event={"ID":"5e1e7158-ad23-4414-9858-0c1056a71f56","Type":"ContainerStarted","Data":"48a986bd7335b23f8dbc5da07fb6dc84d757b200cd61a2761c7f518bf90360c0"} Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.383351 4965 generic.go:334] "Generic (PLEG): container finished" podID="20180c3a-aa7a-4263-9057-c85c636bfc48" containerID="10de4366a806f9e7d262c223f984ee3cf7838ba2b1ce8d35e0aaffedecfcd699" exitCode=0 Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.384387 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-std27" event={"ID":"20180c3a-aa7a-4263-9057-c85c636bfc48","Type":"ContainerDied","Data":"10de4366a806f9e7d262c223f984ee3cf7838ba2b1ce8d35e0aaffedecfcd699"} Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.401980 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.402014 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.402773 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:06 crc kubenswrapper[4965]: E0219 09:45:06.402923 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:06.902900381 +0000 UTC m=+162.524221691 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.403098 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:06 crc kubenswrapper[4965]: E0219 09:45:06.403519 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:06.903505306 +0000 UTC m=+162.524826616 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:06 crc kubenswrapper[4965]: E0219 09:45:06.422064 4965 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e1e7158_ad23_4414_9858_0c1056a71f56.slice/crio-conmon-0fba96ecb8083271d135fcba1da4ad09e63f802b7ed5069346524c2216ebba66.scope\": RecentStats: unable to find data in memory cache]" Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.442188 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s8pfp"] Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.458609 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.466755 4965 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.466970 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" event={"ID":"23c8c1d2-4e7b-4cd4-99cf-92130064bbbf","Type":"ContainerStarted","Data":"0d28050de2d770a88cfbe4e796ff85d9fb7ef33ab7f44b28d8c67105149412fb"} Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.480555 4965 generic.go:334] "Generic (PLEG): container finished" podID="79e5acf4-3803-4356-aa12-622cceae90a5" 
containerID="c29553c314bda6ac7c978a596a9a6573778e5f62e54d551ff98c289ee574ae21" exitCode=0 Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.482006 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-682z8" event={"ID":"79e5acf4-3803-4356-aa12-622cceae90a5","Type":"ContainerDied","Data":"c29553c314bda6ac7c978a596a9a6573778e5f62e54d551ff98c289ee574ae21"} Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.487907 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" Feb 19 09:45:06 crc kubenswrapper[4965]: W0219 09:45:06.488840 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode428e472_401e_45b3_b70b_d2e0f19b52f9.slice/crio-f4492a55226f9991aff63051d538d280d8b1acc8bdf1982928b3eb0c3ec27e6e WatchSource:0}: Error finding container f4492a55226f9991aff63051d538d280d8b1acc8bdf1982928b3eb0c3ec27e6e: Status 404 returned error can't find the container with id f4492a55226f9991aff63051d538d280d8b1acc8bdf1982928b3eb0c3ec27e6e Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.506151 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:06 crc kubenswrapper[4965]: E0219 09:45:06.507981 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:07.007948921 +0000 UTC m=+162.629270301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.608206 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:06 crc kubenswrapper[4965]: E0219 09:45:06.609169 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:07.109156588 +0000 UTC m=+162.730477898 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.646984 4965 patch_prober.go:28] interesting pod/router-default-5444994796-56x8k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 09:45:06 crc kubenswrapper[4965]: [-]has-synced failed: reason withheld Feb 19 09:45:06 crc kubenswrapper[4965]: [+]process-running ok Feb 19 09:45:06 crc kubenswrapper[4965]: healthz check failed Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.647228 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-56x8k" podUID="f9b832c2-b2a0-4017-a323-c317ec4c1c1c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.713007 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:06 crc kubenswrapper[4965]: E0219 09:45:06.713182 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 09:45:07.213155242 +0000 UTC m=+162.834476552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.713255 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:06 crc kubenswrapper[4965]: E0219 09:45:06.713594 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:07.213582612 +0000 UTC m=+162.834903922 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.774849 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8tjpx"] Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.814735 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:06 crc kubenswrapper[4965]: E0219 09:45:06.815115 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:07.315098866 +0000 UTC m=+162.936420176 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:06 crc kubenswrapper[4965]: W0219 09:45:06.839917 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7296417d_9dfd_4ca9_8ad7_c0016daa9b53.slice/crio-fd6de5089869824e38dba5d55f2ba30914a2e723685ba4fd50d3161a36935588 WatchSource:0}: Error finding container fd6de5089869824e38dba5d55f2ba30914a2e723685ba4fd50d3161a36935588: Status 404 returned error can't find the container with id fd6de5089869824e38dba5d55f2ba30914a2e723685ba4fd50d3161a36935588 Feb 19 09:45:06 crc kubenswrapper[4965]: I0219 09:45:06.916018 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:06 crc kubenswrapper[4965]: E0219 09:45:06.916504 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:07.416492077 +0000 UTC m=+163.037813387 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.017885 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:07 crc kubenswrapper[4965]: E0219 09:45:07.018338 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:07.518286507 +0000 UTC m=+163.139607827 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.120505 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:07 crc kubenswrapper[4965]: E0219 09:45:07.120976 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:07.620958959 +0000 UTC m=+163.242280269 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.221414 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:07 crc kubenswrapper[4965]: E0219 09:45:07.221694 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:07.721632443 +0000 UTC m=+163.342953753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.221999 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:07 crc kubenswrapper[4965]: E0219 09:45:07.222913 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:07.722898643 +0000 UTC m=+163.344219953 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.333060 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:07 crc kubenswrapper[4965]: E0219 09:45:07.333311 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:07.833288954 +0000 UTC m=+163.454610264 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.333472 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:07 crc kubenswrapper[4965]: E0219 09:45:07.333863 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 09:45:07.833853467 +0000 UTC m=+163.455174777 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-842k4" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.402367 4965 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-19T09:45:06.466784888Z","Handler":null,"Name":""} Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.438077 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:07 crc kubenswrapper[4965]: E0219 09:45:07.443354 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 09:45:07.943325355 +0000 UTC m=+163.564646755 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.456125 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.457713 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.460913 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.461139 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.470942 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.481902 4965 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.481947 4965 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.503459 4965 generic.go:334] "Generic (PLEG): container 
finished" podID="5e1e7158-ad23-4414-9858-0c1056a71f56" containerID="0fba96ecb8083271d135fcba1da4ad09e63f802b7ed5069346524c2216ebba66" exitCode=0 Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.503538 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5667c" event={"ID":"5e1e7158-ad23-4414-9858-0c1056a71f56","Type":"ContainerDied","Data":"0fba96ecb8083271d135fcba1da4ad09e63f802b7ed5069346524c2216ebba66"} Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.541475 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92de66ac-8962-49a2-890a-a5fe5902fe5c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"92de66ac-8962-49a2-890a-a5fe5902fe5c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.541958 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92de66ac-8962-49a2-890a-a5fe5902fe5c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"92de66ac-8962-49a2-890a-a5fe5902fe5c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.542097 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.550574 4965 generic.go:334] "Generic (PLEG): container finished" podID="7296417d-9dfd-4ca9-8ad7-c0016daa9b53" containerID="8e651eb8e2f2eee86cc9bbe2aabbbf56297f5fd760e9f6136bd99b5a24fbc1e7" 
exitCode=0 Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.550661 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8tjpx" event={"ID":"7296417d-9dfd-4ca9-8ad7-c0016daa9b53","Type":"ContainerDied","Data":"8e651eb8e2f2eee86cc9bbe2aabbbf56297f5fd760e9f6136bd99b5a24fbc1e7"} Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.550690 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8tjpx" event={"ID":"7296417d-9dfd-4ca9-8ad7-c0016daa9b53","Type":"ContainerStarted","Data":"fd6de5089869824e38dba5d55f2ba30914a2e723685ba4fd50d3161a36935588"} Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.556516 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" event={"ID":"23c8c1d2-4e7b-4cd4-99cf-92130064bbbf","Type":"ContainerStarted","Data":"83c4a576cfc1335561543f989ad16ae5e858894195acf19c96c689943674bb49"} Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.593260 4965 generic.go:334] "Generic (PLEG): container finished" podID="e428e472-401e-45b3-b70b-d2e0f19b52f9" containerID="4a72fe077eaedeec97ae4df87b84a7eb2a078ddcd277f35a656ab35710592180" exitCode=0 Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.594136 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8pfp" event={"ID":"e428e472-401e-45b3-b70b-d2e0f19b52f9","Type":"ContainerDied","Data":"4a72fe077eaedeec97ae4df87b84a7eb2a078ddcd277f35a656ab35710592180"} Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.594162 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8pfp" event={"ID":"e428e472-401e-45b3-b70b-d2e0f19b52f9","Type":"ContainerStarted","Data":"f4492a55226f9991aff63051d538d280d8b1acc8bdf1982928b3eb0c3ec27e6e"} Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.615510 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-apiserver/apiserver-76f77b778f-25n6z" Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.622897 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-hgzq5" Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.622931 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-hgzq5" Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.626570 4965 patch_prober.go:28] interesting pod/console-f9d7485db-hgzq5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.626613 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-hgzq5" podUID="91fd349f-c4be-4636-a5a9-76ed721d9afa" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.643454 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92de66ac-8962-49a2-890a-a5fe5902fe5c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"92de66ac-8962-49a2-890a-a5fe5902fe5c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.643546 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92de66ac-8962-49a2-890a-a5fe5902fe5c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"92de66ac-8962-49a2-890a-a5fe5902fe5c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.643866 4965 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92de66ac-8962-49a2-890a-a5fe5902fe5c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"92de66ac-8962-49a2-890a-a5fe5902fe5c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.657393 4965 patch_prober.go:28] interesting pod/router-default-5444994796-56x8k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 09:45:07 crc kubenswrapper[4965]: [-]has-synced failed: reason withheld Feb 19 09:45:07 crc kubenswrapper[4965]: [+]process-running ok Feb 19 09:45:07 crc kubenswrapper[4965]: healthz check failed Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.657458 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-56x8k" podUID="f9b832c2-b2a0-4017-a323-c317ec4c1c1c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.688412 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92de66ac-8962-49a2-890a-a5fe5902fe5c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"92de66ac-8962-49a2-890a-a5fe5902fe5c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.699587 4965 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.699648 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.800479 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-842k4\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.816467 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.852999 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.869924 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 09:45:07 crc kubenswrapper[4965]: I0219 09:45:07.996267 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:08 crc kubenswrapper[4965]: I0219 09:45:08.177464 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-682z8" Feb 19 09:45:08 crc kubenswrapper[4965]: I0219 09:45:08.258319 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79e5acf4-3803-4356-aa12-622cceae90a5-config-volume\") pod \"79e5acf4-3803-4356-aa12-622cceae90a5\" (UID: \"79e5acf4-3803-4356-aa12-622cceae90a5\") " Feb 19 09:45:08 crc kubenswrapper[4965]: I0219 09:45:08.258874 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pnbw\" (UniqueName: \"kubernetes.io/projected/79e5acf4-3803-4356-aa12-622cceae90a5-kube-api-access-2pnbw\") pod \"79e5acf4-3803-4356-aa12-622cceae90a5\" (UID: \"79e5acf4-3803-4356-aa12-622cceae90a5\") " Feb 19 09:45:08 crc kubenswrapper[4965]: I0219 09:45:08.259093 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79e5acf4-3803-4356-aa12-622cceae90a5-secret-volume\") pod \"79e5acf4-3803-4356-aa12-622cceae90a5\" (UID: \"79e5acf4-3803-4356-aa12-622cceae90a5\") " Feb 19 09:45:08 crc kubenswrapper[4965]: I0219 09:45:08.259494 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e5acf4-3803-4356-aa12-622cceae90a5-config-volume" (OuterVolumeSpecName: "config-volume") pod "79e5acf4-3803-4356-aa12-622cceae90a5" (UID: "79e5acf4-3803-4356-aa12-622cceae90a5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:08 crc kubenswrapper[4965]: I0219 09:45:08.272766 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e5acf4-3803-4356-aa12-622cceae90a5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "79e5acf4-3803-4356-aa12-622cceae90a5" (UID: "79e5acf4-3803-4356-aa12-622cceae90a5"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:08 crc kubenswrapper[4965]: I0219 09:45:08.283011 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e5acf4-3803-4356-aa12-622cceae90a5-kube-api-access-2pnbw" (OuterVolumeSpecName: "kube-api-access-2pnbw") pod "79e5acf4-3803-4356-aa12-622cceae90a5" (UID: "79e5acf4-3803-4356-aa12-622cceae90a5"). InnerVolumeSpecName "kube-api-access-2pnbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:08 crc kubenswrapper[4965]: I0219 09:45:08.362339 4965 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79e5acf4-3803-4356-aa12-622cceae90a5-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:08 crc kubenswrapper[4965]: I0219 09:45:08.362386 4965 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79e5acf4-3803-4356-aa12-622cceae90a5-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:08 crc kubenswrapper[4965]: I0219 09:45:08.362396 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pnbw\" (UniqueName: \"kubernetes.io/projected/79e5acf4-3803-4356-aa12-622cceae90a5-kube-api-access-2pnbw\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:08 crc kubenswrapper[4965]: I0219 09:45:08.511109 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-842k4"] Feb 19 09:45:08 crc kubenswrapper[4965]: W0219 09:45:08.530206 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22aed16a_0375_45f1_8762_8d5afddf848a.slice/crio-e4441b8dc7142e1452908c8d31eca19f2bde14eb629afd1838130863ef4b1d7a WatchSource:0}: Error finding container e4441b8dc7142e1452908c8d31eca19f2bde14eb629afd1838130863ef4b1d7a: Status 404 returned error can't find the container with id 
e4441b8dc7142e1452908c8d31eca19f2bde14eb629afd1838130863ef4b1d7a Feb 19 09:45:08 crc kubenswrapper[4965]: I0219 09:45:08.607900 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-842k4" event={"ID":"22aed16a-0375-45f1-8762-8d5afddf848a","Type":"ContainerStarted","Data":"e4441b8dc7142e1452908c8d31eca19f2bde14eb629afd1838130863ef4b1d7a"} Feb 19 09:45:08 crc kubenswrapper[4965]: I0219 09:45:08.629746 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" event={"ID":"23c8c1d2-4e7b-4cd4-99cf-92130064bbbf","Type":"ContainerStarted","Data":"b6059e7b1c3373f7140aa8c1619ace64f8ba68ca5ae9bff1b056ea9d9ccf560b"} Feb 19 09:45:08 crc kubenswrapper[4965]: I0219 09:45:08.639272 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-56x8k" Feb 19 09:45:08 crc kubenswrapper[4965]: I0219 09:45:08.646463 4965 patch_prober.go:28] interesting pod/router-default-5444994796-56x8k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 09:45:08 crc kubenswrapper[4965]: [-]has-synced failed: reason withheld Feb 19 09:45:08 crc kubenswrapper[4965]: [+]process-running ok Feb 19 09:45:08 crc kubenswrapper[4965]: healthz check failed Feb 19 09:45:08 crc kubenswrapper[4965]: I0219 09:45:08.646547 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-56x8k" podUID="f9b832c2-b2a0-4017-a323-c317ec4c1c1c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 09:45:08 crc kubenswrapper[4965]: I0219 09:45:08.646813 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-682z8" 
event={"ID":"79e5acf4-3803-4356-aa12-622cceae90a5","Type":"ContainerDied","Data":"f4abfa615cf4eebe69c97ee930decc901a4e747bbfabad8138853553ed713433"} Feb 19 09:45:08 crc kubenswrapper[4965]: I0219 09:45:08.646879 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4abfa615cf4eebe69c97ee930decc901a4e747bbfabad8138853553ed713433" Feb 19 09:45:08 crc kubenswrapper[4965]: I0219 09:45:08.646831 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524905-682z8" Feb 19 09:45:08 crc kubenswrapper[4965]: I0219 09:45:08.674379 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 09:45:08 crc kubenswrapper[4965]: I0219 09:45:08.675279 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" podStartSLOduration=13.675158273 podStartE2EDuration="13.675158273s" podCreationTimestamp="2026-02-19 09:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:08.659317947 +0000 UTC m=+164.280639277" watchObservedRunningTime="2026-02-19 09:45:08.675158273 +0000 UTC m=+164.296479593" Feb 19 09:45:09 crc kubenswrapper[4965]: I0219 09:45:09.211721 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 19 09:45:09 crc kubenswrapper[4965]: I0219 09:45:09.648697 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-56x8k" Feb 19 09:45:09 crc kubenswrapper[4965]: I0219 09:45:09.786111 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"92de66ac-8962-49a2-890a-a5fe5902fe5c","Type":"ContainerStarted","Data":"708d7a04896d3054934b6fded672aac37691c90f1f4487f123faa46684c5ffa3"} Feb 19 09:45:09 crc kubenswrapper[4965]: I0219 09:45:09.786178 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"92de66ac-8962-49a2-890a-a5fe5902fe5c","Type":"ContainerStarted","Data":"1f6a8e735ddba7dcbe45bb32c9e0a0098815ec43705e996d93076764ed4bf571"} Feb 19 09:45:09 crc kubenswrapper[4965]: I0219 09:45:09.797320 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-842k4" event={"ID":"22aed16a-0375-45f1-8762-8d5afddf848a","Type":"ContainerStarted","Data":"f6fcf298219a38a8a4cdf7848a9faf7a45835e13a724f0074bcb959825282cc5"} Feb 19 09:45:09 crc kubenswrapper[4965]: I0219 09:45:09.797908 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:09 crc kubenswrapper[4965]: I0219 09:45:09.799666 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-56x8k" Feb 19 09:45:09 crc kubenswrapper[4965]: I0219 09:45:09.845979 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.845947823 podStartE2EDuration="2.845947823s" podCreationTimestamp="2026-02-19 09:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:09.839695562 +0000 UTC m=+165.461016902" watchObservedRunningTime="2026-02-19 09:45:09.845947823 +0000 UTC m=+165.467269133" Feb 19 09:45:09 crc kubenswrapper[4965]: I0219 09:45:09.891575 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-842k4" 
podStartSLOduration=142.891553485 podStartE2EDuration="2m22.891553485s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:09.891067884 +0000 UTC m=+165.512389214" watchObservedRunningTime="2026-02-19 09:45:09.891553485 +0000 UTC m=+165.512874795" Feb 19 09:45:10 crc kubenswrapper[4965]: I0219 09:45:10.117950 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs\") pod \"network-metrics-daemon-lwjwk\" (UID: \"1e1b431a-0390-4366-82d1-6cb782c7a9e8\") " pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:45:10 crc kubenswrapper[4965]: I0219 09:45:10.149273 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e1b431a-0390-4366-82d1-6cb782c7a9e8-metrics-certs\") pod \"network-metrics-daemon-lwjwk\" (UID: \"1e1b431a-0390-4366-82d1-6cb782c7a9e8\") " pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:45:10 crc kubenswrapper[4965]: I0219 09:45:10.320922 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lwjwk" Feb 19 09:45:10 crc kubenswrapper[4965]: I0219 09:45:10.871625 4965 generic.go:334] "Generic (PLEG): container finished" podID="92de66ac-8962-49a2-890a-a5fe5902fe5c" containerID="708d7a04896d3054934b6fded672aac37691c90f1f4487f123faa46684c5ffa3" exitCode=0 Feb 19 09:45:10 crc kubenswrapper[4965]: I0219 09:45:10.873457 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"92de66ac-8962-49a2-890a-a5fe5902fe5c","Type":"ContainerDied","Data":"708d7a04896d3054934b6fded672aac37691c90f1f4487f123faa46684c5ffa3"} Feb 19 09:45:10 crc kubenswrapper[4965]: I0219 09:45:10.987411 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lwjwk"] Feb 19 09:45:11 crc kubenswrapper[4965]: I0219 09:45:11.459213 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 09:45:11 crc kubenswrapper[4965]: E0219 09:45:11.459604 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e5acf4-3803-4356-aa12-622cceae90a5" containerName="collect-profiles" Feb 19 09:45:11 crc kubenswrapper[4965]: I0219 09:45:11.459619 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e5acf4-3803-4356-aa12-622cceae90a5" containerName="collect-profiles" Feb 19 09:45:11 crc kubenswrapper[4965]: I0219 09:45:11.459777 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e5acf4-3803-4356-aa12-622cceae90a5" containerName="collect-profiles" Feb 19 09:45:11 crc kubenswrapper[4965]: I0219 09:45:11.460330 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 09:45:11 crc kubenswrapper[4965]: I0219 09:45:11.464445 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 09:45:11 crc kubenswrapper[4965]: I0219 09:45:11.464805 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 09:45:11 crc kubenswrapper[4965]: I0219 09:45:11.472374 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 09:45:11 crc kubenswrapper[4965]: I0219 09:45:11.583793 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df0d6633-0476-445b-9e17-1a2108aa7530-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"df0d6633-0476-445b-9e17-1a2108aa7530\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 09:45:11 crc kubenswrapper[4965]: I0219 09:45:11.583881 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df0d6633-0476-445b-9e17-1a2108aa7530-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"df0d6633-0476-445b-9e17-1a2108aa7530\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 09:45:11 crc kubenswrapper[4965]: I0219 09:45:11.693286 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df0d6633-0476-445b-9e17-1a2108aa7530-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"df0d6633-0476-445b-9e17-1a2108aa7530\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 09:45:11 crc kubenswrapper[4965]: I0219 09:45:11.693422 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/df0d6633-0476-445b-9e17-1a2108aa7530-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"df0d6633-0476-445b-9e17-1a2108aa7530\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 09:45:11 crc kubenswrapper[4965]: I0219 09:45:11.693576 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df0d6633-0476-445b-9e17-1a2108aa7530-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"df0d6633-0476-445b-9e17-1a2108aa7530\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 09:45:11 crc kubenswrapper[4965]: I0219 09:45:11.738058 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df0d6633-0476-445b-9e17-1a2108aa7530-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"df0d6633-0476-445b-9e17-1a2108aa7530\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 09:45:11 crc kubenswrapper[4965]: I0219 09:45:11.791292 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 09:45:11 crc kubenswrapper[4965]: I0219 09:45:11.884157 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lwjwk" event={"ID":"1e1b431a-0390-4366-82d1-6cb782c7a9e8","Type":"ContainerStarted","Data":"e1fb751f8e4052bf07874a20b3d4a852dc5adc53e00a5e91bbbeedd29b4e3caf"} Feb 19 09:45:12 crc kubenswrapper[4965]: I0219 09:45:12.177245 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 09:45:12 crc kubenswrapper[4965]: I0219 09:45:12.373250 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 09:45:12 crc kubenswrapper[4965]: I0219 09:45:12.517729 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92de66ac-8962-49a2-890a-a5fe5902fe5c-kube-api-access\") pod \"92de66ac-8962-49a2-890a-a5fe5902fe5c\" (UID: \"92de66ac-8962-49a2-890a-a5fe5902fe5c\") " Feb 19 09:45:12 crc kubenswrapper[4965]: I0219 09:45:12.517910 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92de66ac-8962-49a2-890a-a5fe5902fe5c-kubelet-dir\") pod \"92de66ac-8962-49a2-890a-a5fe5902fe5c\" (UID: \"92de66ac-8962-49a2-890a-a5fe5902fe5c\") " Feb 19 09:45:12 crc kubenswrapper[4965]: I0219 09:45:12.518003 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92de66ac-8962-49a2-890a-a5fe5902fe5c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "92de66ac-8962-49a2-890a-a5fe5902fe5c" (UID: "92de66ac-8962-49a2-890a-a5fe5902fe5c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:45:12 crc kubenswrapper[4965]: I0219 09:45:12.518469 4965 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92de66ac-8962-49a2-890a-a5fe5902fe5c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:12 crc kubenswrapper[4965]: I0219 09:45:12.526249 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92de66ac-8962-49a2-890a-a5fe5902fe5c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "92de66ac-8962-49a2-890a-a5fe5902fe5c" (UID: "92de66ac-8962-49a2-890a-a5fe5902fe5c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:12 crc kubenswrapper[4965]: I0219 09:45:12.622679 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92de66ac-8962-49a2-890a-a5fe5902fe5c-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:12 crc kubenswrapper[4965]: I0219 09:45:12.926948 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lwjwk" event={"ID":"1e1b431a-0390-4366-82d1-6cb782c7a9e8","Type":"ContainerStarted","Data":"63a97a4ec5a56f3e96cc0cc5e78d20dec6d51c208e79ce66b45903c7d2581fb7"} Feb 19 09:45:12 crc kubenswrapper[4965]: I0219 09:45:12.933318 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"df0d6633-0476-445b-9e17-1a2108aa7530","Type":"ContainerStarted","Data":"738a143a38ae0421e56c354af43b0e1a63d328a2cbe5c1ba3f7024236c3dda9b"} Feb 19 09:45:12 crc kubenswrapper[4965]: I0219 09:45:12.944599 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"92de66ac-8962-49a2-890a-a5fe5902fe5c","Type":"ContainerDied","Data":"1f6a8e735ddba7dcbe45bb32c9e0a0098815ec43705e996d93076764ed4bf571"} Feb 19 09:45:12 crc kubenswrapper[4965]: I0219 09:45:12.944647 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f6a8e735ddba7dcbe45bb32c9e0a0098815ec43705e996d93076764ed4bf571" Feb 19 09:45:12 crc kubenswrapper[4965]: I0219 09:45:12.944764 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 09:45:13 crc kubenswrapper[4965]: I0219 09:45:13.796335 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-t4rcw" Feb 19 09:45:14 crc kubenswrapper[4965]: I0219 09:45:13.995242 4965 generic.go:334] "Generic (PLEG): container finished" podID="df0d6633-0476-445b-9e17-1a2108aa7530" containerID="06fad9c6643ed9e7c4a3a2c909472dd6883b3128401e919920ffd71e5bb55e2a" exitCode=0 Feb 19 09:45:14 crc kubenswrapper[4965]: I0219 09:45:13.995551 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"df0d6633-0476-445b-9e17-1a2108aa7530","Type":"ContainerDied","Data":"06fad9c6643ed9e7c4a3a2c909472dd6883b3128401e919920ffd71e5bb55e2a"} Feb 19 09:45:14 crc kubenswrapper[4965]: I0219 09:45:14.001682 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lwjwk" event={"ID":"1e1b431a-0390-4366-82d1-6cb782c7a9e8","Type":"ContainerStarted","Data":"d623a23ca3930a44a86422087bd9a7d76fe31cac4a3323f298710e86fa4ba2fc"} Feb 19 09:45:14 crc kubenswrapper[4965]: I0219 09:45:14.028764 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-lwjwk" podStartSLOduration=147.028739863 podStartE2EDuration="2m27.028739863s" podCreationTimestamp="2026-02-19 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:14.027871412 +0000 UTC m=+169.649192732" watchObservedRunningTime="2026-02-19 09:45:14.028739863 +0000 UTC m=+169.650061173" Feb 19 09:45:16 crc kubenswrapper[4965]: I0219 09:45:16.096001 4965 patch_prober.go:28] interesting pod/downloads-7954f5f757-8b2cq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": 
dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 19 09:45:16 crc kubenswrapper[4965]: I0219 09:45:16.096386 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-8b2cq" podUID="efb57d4d-b3d4-42fa-a27b-299bdf135836" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 19 09:45:16 crc kubenswrapper[4965]: I0219 09:45:16.101782 4965 patch_prober.go:28] interesting pod/downloads-7954f5f757-8b2cq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 19 09:45:16 crc kubenswrapper[4965]: I0219 09:45:16.101849 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8b2cq" podUID="efb57d4d-b3d4-42fa-a27b-299bdf135836" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 19 09:45:16 crc kubenswrapper[4965]: I0219 09:45:16.601084 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:45:16 crc kubenswrapper[4965]: I0219 09:45:16.601170 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:45:17 crc kubenswrapper[4965]: I0219 09:45:17.628568 4965 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-console/console-f9d7485db-hgzq5" Feb 19 09:45:17 crc kubenswrapper[4965]: I0219 09:45:17.633352 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-hgzq5" Feb 19 09:45:22 crc kubenswrapper[4965]: I0219 09:45:22.264025 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hdbpp"] Feb 19 09:45:22 crc kubenswrapper[4965]: I0219 09:45:22.265756 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" podUID="f2aae678-17fc-4272-be7e-839946082d8b" containerName="controller-manager" containerID="cri-o://8ad95ff9a21fc85a50f044febc9884a5269525300cc69c0697206be2254733aa" gracePeriod=30 Feb 19 09:45:22 crc kubenswrapper[4965]: I0219 09:45:22.270624 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv"] Feb 19 09:45:22 crc kubenswrapper[4965]: I0219 09:45:22.270913 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv" podUID="6ccda909-983a-43f6-9e98-c46683e6f63f" containerName="route-controller-manager" containerID="cri-o://99414624e9db0435b58e12aabbcfccbd4e47ec3cdf6e81c63f11ad07c6cd2e0b" gracePeriod=30 Feb 19 09:45:23 crc kubenswrapper[4965]: I0219 09:45:23.110164 4965 generic.go:334] "Generic (PLEG): container finished" podID="f2aae678-17fc-4272-be7e-839946082d8b" containerID="8ad95ff9a21fc85a50f044febc9884a5269525300cc69c0697206be2254733aa" exitCode=0 Feb 19 09:45:23 crc kubenswrapper[4965]: I0219 09:45:23.110234 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" 
event={"ID":"f2aae678-17fc-4272-be7e-839946082d8b","Type":"ContainerDied","Data":"8ad95ff9a21fc85a50f044febc9884a5269525300cc69c0697206be2254733aa"} Feb 19 09:45:23 crc kubenswrapper[4965]: I0219 09:45:23.113293 4965 generic.go:334] "Generic (PLEG): container finished" podID="6ccda909-983a-43f6-9e98-c46683e6f63f" containerID="99414624e9db0435b58e12aabbcfccbd4e47ec3cdf6e81c63f11ad07c6cd2e0b" exitCode=0 Feb 19 09:45:23 crc kubenswrapper[4965]: I0219 09:45:23.113324 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv" event={"ID":"6ccda909-983a-43f6-9e98-c46683e6f63f","Type":"ContainerDied","Data":"99414624e9db0435b58e12aabbcfccbd4e47ec3cdf6e81c63f11ad07c6cd2e0b"} Feb 19 09:45:24 crc kubenswrapper[4965]: I0219 09:45:24.333484 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.646940 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.651967 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.718022 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ccda909-983a-43f6-9e98-c46683e6f63f-serving-cert\") pod \"6ccda909-983a-43f6-9e98-c46683e6f63f\" (UID: \"6ccda909-983a-43f6-9e98-c46683e6f63f\") " Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.718097 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pflm7\" (UniqueName: \"kubernetes.io/projected/6ccda909-983a-43f6-9e98-c46683e6f63f-kube-api-access-pflm7\") pod \"6ccda909-983a-43f6-9e98-c46683e6f63f\" (UID: \"6ccda909-983a-43f6-9e98-c46683e6f63f\") " Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.718128 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ccda909-983a-43f6-9e98-c46683e6f63f-config\") pod \"6ccda909-983a-43f6-9e98-c46683e6f63f\" (UID: \"6ccda909-983a-43f6-9e98-c46683e6f63f\") " Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.718255 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df0d6633-0476-445b-9e17-1a2108aa7530-kube-api-access\") pod \"df0d6633-0476-445b-9e17-1a2108aa7530\" (UID: \"df0d6633-0476-445b-9e17-1a2108aa7530\") " Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.718303 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df0d6633-0476-445b-9e17-1a2108aa7530-kubelet-dir\") pod \"df0d6633-0476-445b-9e17-1a2108aa7530\" (UID: \"df0d6633-0476-445b-9e17-1a2108aa7530\") " Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.718361 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/6ccda909-983a-43f6-9e98-c46683e6f63f-client-ca\") pod \"6ccda909-983a-43f6-9e98-c46683e6f63f\" (UID: \"6ccda909-983a-43f6-9e98-c46683e6f63f\") " Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.720547 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ccda909-983a-43f6-9e98-c46683e6f63f-client-ca" (OuterVolumeSpecName: "client-ca") pod "6ccda909-983a-43f6-9e98-c46683e6f63f" (UID: "6ccda909-983a-43f6-9e98-c46683e6f63f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.721307 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df0d6633-0476-445b-9e17-1a2108aa7530-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "df0d6633-0476-445b-9e17-1a2108aa7530" (UID: "df0d6633-0476-445b-9e17-1a2108aa7530"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.722383 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ccda909-983a-43f6-9e98-c46683e6f63f-config" (OuterVolumeSpecName: "config") pod "6ccda909-983a-43f6-9e98-c46683e6f63f" (UID: "6ccda909-983a-43f6-9e98-c46683e6f63f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.723417 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z"] Feb 19 09:45:25 crc kubenswrapper[4965]: E0219 09:45:25.723867 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ccda909-983a-43f6-9e98-c46683e6f63f" containerName="route-controller-manager" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.723948 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ccda909-983a-43f6-9e98-c46683e6f63f" containerName="route-controller-manager" Feb 19 09:45:25 crc kubenswrapper[4965]: E0219 09:45:25.724014 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0d6633-0476-445b-9e17-1a2108aa7530" containerName="pruner" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.724079 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0d6633-0476-445b-9e17-1a2108aa7530" containerName="pruner" Feb 19 09:45:25 crc kubenswrapper[4965]: E0219 09:45:25.724148 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92de66ac-8962-49a2-890a-a5fe5902fe5c" containerName="pruner" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.724477 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="92de66ac-8962-49a2-890a-a5fe5902fe5c" containerName="pruner" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.724675 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ccda909-983a-43f6-9e98-c46683e6f63f" containerName="route-controller-manager" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.724791 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="92de66ac-8962-49a2-890a-a5fe5902fe5c" containerName="pruner" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.724877 4965 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="df0d6633-0476-445b-9e17-1a2108aa7530" containerName="pruner" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.725498 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.731420 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z"] Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.732515 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df0d6633-0476-445b-9e17-1a2108aa7530-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "df0d6633-0476-445b-9e17-1a2108aa7530" (UID: "df0d6633-0476-445b-9e17-1a2108aa7530"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.738410 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ccda909-983a-43f6-9e98-c46683e6f63f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6ccda909-983a-43f6-9e98-c46683e6f63f" (UID: "6ccda909-983a-43f6-9e98-c46683e6f63f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.738393 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ccda909-983a-43f6-9e98-c46683e6f63f-kube-api-access-pflm7" (OuterVolumeSpecName: "kube-api-access-pflm7") pod "6ccda909-983a-43f6-9e98-c46683e6f63f" (UID: "6ccda909-983a-43f6-9e98-c46683e6f63f"). InnerVolumeSpecName "kube-api-access-pflm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.820555 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ee97219-32da-4510-84b0-00c9dca87629-config\") pod \"route-controller-manager-5664c447dc-fq72z\" (UID: \"4ee97219-32da-4510-84b0-00c9dca87629\") " pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.820665 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fbg2\" (UniqueName: \"kubernetes.io/projected/4ee97219-32da-4510-84b0-00c9dca87629-kube-api-access-5fbg2\") pod \"route-controller-manager-5664c447dc-fq72z\" (UID: \"4ee97219-32da-4510-84b0-00c9dca87629\") " pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.820910 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ee97219-32da-4510-84b0-00c9dca87629-serving-cert\") pod \"route-controller-manager-5664c447dc-fq72z\" (UID: \"4ee97219-32da-4510-84b0-00c9dca87629\") " pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.821057 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ee97219-32da-4510-84b0-00c9dca87629-client-ca\") pod \"route-controller-manager-5664c447dc-fq72z\" (UID: \"4ee97219-32da-4510-84b0-00c9dca87629\") " pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.821255 4965 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df0d6633-0476-445b-9e17-1a2108aa7530-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.821283 4965 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df0d6633-0476-445b-9e17-1a2108aa7530-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.821304 4965 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ccda909-983a-43f6-9e98-c46683e6f63f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.821321 4965 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ccda909-983a-43f6-9e98-c46683e6f63f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.821340 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pflm7\" (UniqueName: \"kubernetes.io/projected/6ccda909-983a-43f6-9e98-c46683e6f63f-kube-api-access-pflm7\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.821362 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ccda909-983a-43f6-9e98-c46683e6f63f-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.922681 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ee97219-32da-4510-84b0-00c9dca87629-config\") pod \"route-controller-manager-5664c447dc-fq72z\" (UID: \"4ee97219-32da-4510-84b0-00c9dca87629\") " pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.922761 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5fbg2\" (UniqueName: \"kubernetes.io/projected/4ee97219-32da-4510-84b0-00c9dca87629-kube-api-access-5fbg2\") pod \"route-controller-manager-5664c447dc-fq72z\" (UID: \"4ee97219-32da-4510-84b0-00c9dca87629\") " pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.922806 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ee97219-32da-4510-84b0-00c9dca87629-serving-cert\") pod \"route-controller-manager-5664c447dc-fq72z\" (UID: \"4ee97219-32da-4510-84b0-00c9dca87629\") " pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.922842 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ee97219-32da-4510-84b0-00c9dca87629-client-ca\") pod \"route-controller-manager-5664c447dc-fq72z\" (UID: \"4ee97219-32da-4510-84b0-00c9dca87629\") " pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.925059 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ee97219-32da-4510-84b0-00c9dca87629-client-ca\") pod \"route-controller-manager-5664c447dc-fq72z\" (UID: \"4ee97219-32da-4510-84b0-00c9dca87629\") " pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.926255 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ee97219-32da-4510-84b0-00c9dca87629-config\") pod \"route-controller-manager-5664c447dc-fq72z\" (UID: \"4ee97219-32da-4510-84b0-00c9dca87629\") " 
pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.932129 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ee97219-32da-4510-84b0-00c9dca87629-serving-cert\") pod \"route-controller-manager-5664c447dc-fq72z\" (UID: \"4ee97219-32da-4510-84b0-00c9dca87629\") " pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z" Feb 19 09:45:25 crc kubenswrapper[4965]: I0219 09:45:25.942252 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fbg2\" (UniqueName: \"kubernetes.io/projected/4ee97219-32da-4510-84b0-00c9dca87629-kube-api-access-5fbg2\") pod \"route-controller-manager-5664c447dc-fq72z\" (UID: \"4ee97219-32da-4510-84b0-00c9dca87629\") " pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z" Feb 19 09:45:26 crc kubenswrapper[4965]: I0219 09:45:26.072065 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z" Feb 19 09:45:26 crc kubenswrapper[4965]: I0219 09:45:26.121778 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-8b2cq" Feb 19 09:45:26 crc kubenswrapper[4965]: I0219 09:45:26.131533 4965 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-hdbpp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 19 09:45:26 crc kubenswrapper[4965]: I0219 09:45:26.131607 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" podUID="f2aae678-17fc-4272-be7e-839946082d8b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 19 09:45:26 crc kubenswrapper[4965]: I0219 09:45:26.135450 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 09:45:26 crc kubenswrapper[4965]: I0219 09:45:26.135406 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"df0d6633-0476-445b-9e17-1a2108aa7530","Type":"ContainerDied","Data":"738a143a38ae0421e56c354af43b0e1a63d328a2cbe5c1ba3f7024236c3dda9b"} Feb 19 09:45:26 crc kubenswrapper[4965]: I0219 09:45:26.135705 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="738a143a38ae0421e56c354af43b0e1a63d328a2cbe5c1ba3f7024236c3dda9b" Feb 19 09:45:26 crc kubenswrapper[4965]: I0219 09:45:26.137667 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv" event={"ID":"6ccda909-983a-43f6-9e98-c46683e6f63f","Type":"ContainerDied","Data":"290f3f233b34ff58939b593e5492dacd038f0ef5b6fe34e8e58b12f352af98cd"} Feb 19 09:45:26 crc kubenswrapper[4965]: I0219 09:45:26.137746 4965 scope.go:117] "RemoveContainer" containerID="99414624e9db0435b58e12aabbcfccbd4e47ec3cdf6e81c63f11ad07c6cd2e0b" Feb 19 09:45:26 crc kubenswrapper[4965]: I0219 09:45:26.137822 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv" Feb 19 09:45:26 crc kubenswrapper[4965]: I0219 09:45:26.197298 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv"] Feb 19 09:45:26 crc kubenswrapper[4965]: I0219 09:45:26.202101 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rttcv"] Feb 19 09:45:27 crc kubenswrapper[4965]: I0219 09:45:27.212094 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ccda909-983a-43f6-9e98-c46683e6f63f" path="/var/lib/kubelet/pods/6ccda909-983a-43f6-9e98-c46683e6f63f/volumes" Feb 19 09:45:28 crc kubenswrapper[4965]: I0219 09:45:28.004137 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:45:31 crc kubenswrapper[4965]: I0219 09:45:31.696970 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" Feb 19 09:45:31 crc kubenswrapper[4965]: I0219 09:45:31.800926 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqv8d\" (UniqueName: \"kubernetes.io/projected/f2aae678-17fc-4272-be7e-839946082d8b-kube-api-access-dqv8d\") pod \"f2aae678-17fc-4272-be7e-839946082d8b\" (UID: \"f2aae678-17fc-4272-be7e-839946082d8b\") " Feb 19 09:45:31 crc kubenswrapper[4965]: I0219 09:45:31.801034 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2aae678-17fc-4272-be7e-839946082d8b-config\") pod \"f2aae678-17fc-4272-be7e-839946082d8b\" (UID: \"f2aae678-17fc-4272-be7e-839946082d8b\") " Feb 19 09:45:31 crc kubenswrapper[4965]: I0219 09:45:31.801093 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2aae678-17fc-4272-be7e-839946082d8b-proxy-ca-bundles\") pod \"f2aae678-17fc-4272-be7e-839946082d8b\" (UID: \"f2aae678-17fc-4272-be7e-839946082d8b\") " Feb 19 09:45:31 crc kubenswrapper[4965]: I0219 09:45:31.801130 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2aae678-17fc-4272-be7e-839946082d8b-serving-cert\") pod \"f2aae678-17fc-4272-be7e-839946082d8b\" (UID: \"f2aae678-17fc-4272-be7e-839946082d8b\") " Feb 19 09:45:31 crc kubenswrapper[4965]: I0219 09:45:31.801176 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2aae678-17fc-4272-be7e-839946082d8b-client-ca\") pod \"f2aae678-17fc-4272-be7e-839946082d8b\" (UID: \"f2aae678-17fc-4272-be7e-839946082d8b\") " Feb 19 09:45:31 crc kubenswrapper[4965]: I0219 09:45:31.802175 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/f2aae678-17fc-4272-be7e-839946082d8b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f2aae678-17fc-4272-be7e-839946082d8b" (UID: "f2aae678-17fc-4272-be7e-839946082d8b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:31 crc kubenswrapper[4965]: I0219 09:45:31.802536 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2aae678-17fc-4272-be7e-839946082d8b-client-ca" (OuterVolumeSpecName: "client-ca") pod "f2aae678-17fc-4272-be7e-839946082d8b" (UID: "f2aae678-17fc-4272-be7e-839946082d8b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:31 crc kubenswrapper[4965]: I0219 09:45:31.802564 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2aae678-17fc-4272-be7e-839946082d8b-config" (OuterVolumeSpecName: "config") pod "f2aae678-17fc-4272-be7e-839946082d8b" (UID: "f2aae678-17fc-4272-be7e-839946082d8b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:45:31 crc kubenswrapper[4965]: I0219 09:45:31.811420 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2aae678-17fc-4272-be7e-839946082d8b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f2aae678-17fc-4272-be7e-839946082d8b" (UID: "f2aae678-17fc-4272-be7e-839946082d8b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:45:31 crc kubenswrapper[4965]: I0219 09:45:31.814450 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2aae678-17fc-4272-be7e-839946082d8b-kube-api-access-dqv8d" (OuterVolumeSpecName: "kube-api-access-dqv8d") pod "f2aae678-17fc-4272-be7e-839946082d8b" (UID: "f2aae678-17fc-4272-be7e-839946082d8b"). InnerVolumeSpecName "kube-api-access-dqv8d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:31 crc kubenswrapper[4965]: I0219 09:45:31.902910 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqv8d\" (UniqueName: \"kubernetes.io/projected/f2aae678-17fc-4272-be7e-839946082d8b-kube-api-access-dqv8d\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:31 crc kubenswrapper[4965]: I0219 09:45:31.902944 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2aae678-17fc-4272-be7e-839946082d8b-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:31 crc kubenswrapper[4965]: I0219 09:45:31.902956 4965 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2aae678-17fc-4272-be7e-839946082d8b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:31 crc kubenswrapper[4965]: I0219 09:45:31.902967 4965 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2aae678-17fc-4272-be7e-839946082d8b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:31 crc kubenswrapper[4965]: I0219 09:45:31.902978 4965 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2aae678-17fc-4272-be7e-839946082d8b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.175911 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" event={"ID":"f2aae678-17fc-4272-be7e-839946082d8b","Type":"ContainerDied","Data":"123292214a789aff1bb137cb98d0d600c2bc9e7082288ec90dc9f88a8ac801c3"} Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.175980 4965 scope.go:117] "RemoveContainer" containerID="8ad95ff9a21fc85a50f044febc9884a5269525300cc69c0697206be2254733aa" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.176090 4965 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hdbpp" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.214327 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hdbpp"] Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.218305 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hdbpp"] Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.362596 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x"] Feb 19 09:45:32 crc kubenswrapper[4965]: E0219 09:45:32.362858 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2aae678-17fc-4272-be7e-839946082d8b" containerName="controller-manager" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.362879 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2aae678-17fc-4272-be7e-839946082d8b" containerName="controller-manager" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.362988 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2aae678-17fc-4272-be7e-839946082d8b" containerName="controller-manager" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.363466 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.366409 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.368493 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.368884 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.369253 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.369851 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.371851 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x"] Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.373138 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.374454 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.510924 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2fe3f117-83d8-436e-90ed-37728223eb61-proxy-ca-bundles\") pod \"controller-manager-784c4bbc6f-g2f6x\" (UID: \"2fe3f117-83d8-436e-90ed-37728223eb61\") " 
pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.511568 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fe3f117-83d8-436e-90ed-37728223eb61-client-ca\") pod \"controller-manager-784c4bbc6f-g2f6x\" (UID: \"2fe3f117-83d8-436e-90ed-37728223eb61\") " pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.511599 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdwj4\" (UniqueName: \"kubernetes.io/projected/2fe3f117-83d8-436e-90ed-37728223eb61-kube-api-access-tdwj4\") pod \"controller-manager-784c4bbc6f-g2f6x\" (UID: \"2fe3f117-83d8-436e-90ed-37728223eb61\") " pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.511631 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fe3f117-83d8-436e-90ed-37728223eb61-serving-cert\") pod \"controller-manager-784c4bbc6f-g2f6x\" (UID: \"2fe3f117-83d8-436e-90ed-37728223eb61\") " pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.511738 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fe3f117-83d8-436e-90ed-37728223eb61-config\") pod \"controller-manager-784c4bbc6f-g2f6x\" (UID: \"2fe3f117-83d8-436e-90ed-37728223eb61\") " pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.612799 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/2fe3f117-83d8-436e-90ed-37728223eb61-client-ca\") pod \"controller-manager-784c4bbc6f-g2f6x\" (UID: \"2fe3f117-83d8-436e-90ed-37728223eb61\") " pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.612867 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdwj4\" (UniqueName: \"kubernetes.io/projected/2fe3f117-83d8-436e-90ed-37728223eb61-kube-api-access-tdwj4\") pod \"controller-manager-784c4bbc6f-g2f6x\" (UID: \"2fe3f117-83d8-436e-90ed-37728223eb61\") " pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.612897 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fe3f117-83d8-436e-90ed-37728223eb61-serving-cert\") pod \"controller-manager-784c4bbc6f-g2f6x\" (UID: \"2fe3f117-83d8-436e-90ed-37728223eb61\") " pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.612933 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fe3f117-83d8-436e-90ed-37728223eb61-config\") pod \"controller-manager-784c4bbc6f-g2f6x\" (UID: \"2fe3f117-83d8-436e-90ed-37728223eb61\") " pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.613003 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2fe3f117-83d8-436e-90ed-37728223eb61-proxy-ca-bundles\") pod \"controller-manager-784c4bbc6f-g2f6x\" (UID: \"2fe3f117-83d8-436e-90ed-37728223eb61\") " pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.614266 4965 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fe3f117-83d8-436e-90ed-37728223eb61-client-ca\") pod \"controller-manager-784c4bbc6f-g2f6x\" (UID: \"2fe3f117-83d8-436e-90ed-37728223eb61\") " pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.614705 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2fe3f117-83d8-436e-90ed-37728223eb61-proxy-ca-bundles\") pod \"controller-manager-784c4bbc6f-g2f6x\" (UID: \"2fe3f117-83d8-436e-90ed-37728223eb61\") " pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.614747 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fe3f117-83d8-436e-90ed-37728223eb61-config\") pod \"controller-manager-784c4bbc6f-g2f6x\" (UID: \"2fe3f117-83d8-436e-90ed-37728223eb61\") " pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.619061 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fe3f117-83d8-436e-90ed-37728223eb61-serving-cert\") pod \"controller-manager-784c4bbc6f-g2f6x\" (UID: \"2fe3f117-83d8-436e-90ed-37728223eb61\") " pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" Feb 19 09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.631140 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdwj4\" (UniqueName: \"kubernetes.io/projected/2fe3f117-83d8-436e-90ed-37728223eb61-kube-api-access-tdwj4\") pod \"controller-manager-784c4bbc6f-g2f6x\" (UID: \"2fe3f117-83d8-436e-90ed-37728223eb61\") " pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" Feb 19 
09:45:32 crc kubenswrapper[4965]: I0219 09:45:32.703809 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" Feb 19 09:45:33 crc kubenswrapper[4965]: I0219 09:45:33.205586 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2aae678-17fc-4272-be7e-839946082d8b" path="/var/lib/kubelet/pods/f2aae678-17fc-4272-be7e-839946082d8b/volumes" Feb 19 09:45:37 crc kubenswrapper[4965]: I0219 09:45:37.979801 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bdvzx" Feb 19 09:45:41 crc kubenswrapper[4965]: E0219 09:45:41.841444 4965 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 19 09:45:41 crc kubenswrapper[4965]: E0219 09:45:41.843812 4965 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfljm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-s8pfp_openshift-marketplace(e428e472-401e-45b3-b70b-d2e0f19b52f9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 09:45:41 crc kubenswrapper[4965]: E0219 09:45:41.845521 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-s8pfp" podUID="e428e472-401e-45b3-b70b-d2e0f19b52f9" Feb 19 09:45:41 crc 
kubenswrapper[4965]: E0219 09:45:41.864463 4965 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 19 09:45:41 crc kubenswrapper[4965]: E0219 09:45:41.865246 4965 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5p7n4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-8tjpx_openshift-marketplace(7296417d-9dfd-4ca9-8ad7-c0016daa9b53): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 09:45:41 crc kubenswrapper[4965]: E0219 09:45:41.866502 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-8tjpx" podUID="7296417d-9dfd-4ca9-8ad7-c0016daa9b53" Feb 19 09:45:41 crc kubenswrapper[4965]: E0219 09:45:41.881220 4965 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 19 09:45:41 crc kubenswrapper[4965]: E0219 09:45:41.881418 4965 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pbqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-c55hf_openshift-marketplace(b1832525-d3f5-47bc-879b-4d4e4f3c14bd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 09:45:41 crc kubenswrapper[4965]: E0219 09:45:41.883250 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-c55hf" podUID="b1832525-d3f5-47bc-879b-4d4e4f3c14bd" Feb 19 09:45:41 crc 
kubenswrapper[4965]: E0219 09:45:41.907723 4965 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 19 09:45:41 crc kubenswrapper[4965]: E0219 09:45:41.908033 4965 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-db2vx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-tlmst_openshift-marketplace(badd7c24-44c3-4853-9611-aeb49c3df0ab): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 09:45:41 crc kubenswrapper[4965]: E0219 09:45:41.909365 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tlmst" podUID="badd7c24-44c3-4853-9611-aeb49c3df0ab" Feb 19 09:45:41 crc kubenswrapper[4965]: E0219 09:45:41.924866 4965 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 19 09:45:41 crc kubenswrapper[4965]: E0219 09:45:41.925046 4965 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbdk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-fxnw5_openshift-marketplace(c2ea1b40-1bc8-462a-a2a2-218c24c27584): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 19 09:45:41 crc kubenswrapper[4965]: E0219 09:45:41.926334 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-fxnw5" podUID="c2ea1b40-1bc8-462a-a2a2-218c24c27584"
Feb 19 09:45:42 crc kubenswrapper[4965]: I0219 09:45:42.037279 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z"]
Feb 19 09:45:42 crc kubenswrapper[4965]: I0219 09:45:42.082566 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x"]
Feb 19 09:45:42 crc kubenswrapper[4965]: W0219 09:45:42.099213 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fe3f117_83d8_436e_90ed_37728223eb61.slice/crio-6720008ec0df243c334d0fbf8dea7052853f637af1db6f4e471dd77d8684027d WatchSource:0}: Error finding container 6720008ec0df243c334d0fbf8dea7052853f637af1db6f4e471dd77d8684027d: Status 404 returned error can't find the container with id 6720008ec0df243c334d0fbf8dea7052853f637af1db6f4e471dd77d8684027d
Feb 19 09:45:42 crc kubenswrapper[4965]: I0219 09:45:42.154932 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x"]
Feb 19 09:45:42 crc kubenswrapper[4965]: I0219 09:45:42.257235 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z"]
Feb 19 09:45:42 crc kubenswrapper[4965]: I0219 09:45:42.271679 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5667c" event={"ID":"5e1e7158-ad23-4414-9858-0c1056a71f56","Type":"ContainerStarted","Data":"df9349b40801de7a33659f4da34ffba8824b21bb84fe071f573dede5fe655566"}
Feb 19 09:45:42 crc kubenswrapper[4965]: I0219 09:45:42.277056 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-std27" event={"ID":"20180c3a-aa7a-4263-9057-c85c636bfc48","Type":"ContainerStarted","Data":"b699e697e01e0301e194c248be1ff9abd9c81a9a0a4390a8276b9a222feb9990"}
Feb 19 09:45:42 crc kubenswrapper[4965]: I0219 09:45:42.280868 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z" event={"ID":"4ee97219-32da-4510-84b0-00c9dca87629","Type":"ContainerStarted","Data":"637e74bcd11dce389240e94cb96644ce24f4b8fd537f8b289aa5037ba24175d6"}
Feb 19 09:45:42 crc kubenswrapper[4965]: I0219 09:45:42.280908 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z" event={"ID":"4ee97219-32da-4510-84b0-00c9dca87629","Type":"ContainerStarted","Data":"316fd3815ea21d4122b62012a848e89ab9fd15789176b67cfc9c07e0c841a814"}
Feb 19 09:45:42 crc kubenswrapper[4965]: I0219 09:45:42.281646 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z"
Feb 19 09:45:42 crc kubenswrapper[4965]: I0219 09:45:42.283876 4965 patch_prober.go:28] interesting pod/route-controller-manager-5664c447dc-fq72z container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body=
Feb 19 09:45:42 crc kubenswrapper[4965]: I0219 09:45:42.283911 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z" podUID="4ee97219-32da-4510-84b0-00c9dca87629" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused"
Feb 19 09:45:42 crc kubenswrapper[4965]: I0219 09:45:42.284056 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" event={"ID":"2fe3f117-83d8-436e-90ed-37728223eb61","Type":"ContainerStarted","Data":"548ac61ed93abddef97afbe570ce182fef0b4c7d5c89b18be08c44b3527eeba8"}
Feb 19 09:45:42 crc kubenswrapper[4965]: I0219 09:45:42.284079 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" event={"ID":"2fe3f117-83d8-436e-90ed-37728223eb61","Type":"ContainerStarted","Data":"6720008ec0df243c334d0fbf8dea7052853f637af1db6f4e471dd77d8684027d"}
Feb 19 09:45:42 crc kubenswrapper[4965]: I0219 09:45:42.284159 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" podUID="2fe3f117-83d8-436e-90ed-37728223eb61" containerName="controller-manager" containerID="cri-o://548ac61ed93abddef97afbe570ce182fef0b4c7d5c89b18be08c44b3527eeba8" gracePeriod=30
Feb 19 09:45:42 crc kubenswrapper[4965]: I0219 09:45:42.284396 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x"
Feb 19 09:45:42 crc kubenswrapper[4965]: I0219 09:45:42.286341 4965 patch_prober.go:28] interesting pod/controller-manager-784c4bbc6f-g2f6x container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" start-of-body=
Feb 19 09:45:42 crc kubenswrapper[4965]: I0219 09:45:42.286381 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" podUID="2fe3f117-83d8-436e-90ed-37728223eb61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused"
Feb 19 09:45:42 crc kubenswrapper[4965]: I0219 09:45:42.293054 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shcvk" event={"ID":"98f1b66c-456a-415c-b093-20ab1fa33b9b","Type":"ContainerStarted","Data":"05052189d3e724d700820cf98908bb07ddce6850c29bc56b75b5012f6ea8db45"}
Feb 19 09:45:42 crc kubenswrapper[4965]: E0219 09:45:42.294927 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-s8pfp" podUID="e428e472-401e-45b3-b70b-d2e0f19b52f9"
Feb 19 09:45:42 crc kubenswrapper[4965]: E0219 09:45:42.295006 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8tjpx" podUID="7296417d-9dfd-4ca9-8ad7-c0016daa9b53"
Feb 19 09:45:42 crc kubenswrapper[4965]: E0219 09:45:42.296620 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-c55hf" podUID="b1832525-d3f5-47bc-879b-4d4e4f3c14bd"
Feb 19 09:45:42 crc kubenswrapper[4965]: E0219 09:45:42.296712 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tlmst" podUID="badd7c24-44c3-4853-9611-aeb49c3df0ab"
Feb 19 09:45:42 crc kubenswrapper[4965]: E0219 09:45:42.297019 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-fxnw5" podUID="c2ea1b40-1bc8-462a-a2a2-218c24c27584"
Feb 19 09:45:42 crc kubenswrapper[4965]: I0219 09:45:42.326987 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" podStartSLOduration=20.326965691 podStartE2EDuration="20.326965691s" podCreationTimestamp="2026-02-19 09:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:42.325282588 +0000 UTC m=+197.946603908" watchObservedRunningTime="2026-02-19 09:45:42.326965691 +0000 UTC m=+197.948286991"
Feb 19 09:45:42 crc kubenswrapper[4965]: I0219 09:45:42.397277 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z" podStartSLOduration=20.397248556 podStartE2EDuration="20.397248556s" podCreationTimestamp="2026-02-19 09:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:42.378795352 +0000 UTC m=+198.000116662" watchObservedRunningTime="2026-02-19 09:45:42.397248556 +0000 UTC m=+198.018569856"
Feb 19 09:45:42 crc kubenswrapper[4965]: I0219 09:45:42.712178 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x"
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.301592 4965 generic.go:334] "Generic (PLEG): container finished" podID="98f1b66c-456a-415c-b093-20ab1fa33b9b" containerID="05052189d3e724d700820cf98908bb07ddce6850c29bc56b75b5012f6ea8db45" exitCode=0
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.301718 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shcvk" event={"ID":"98f1b66c-456a-415c-b093-20ab1fa33b9b","Type":"ContainerDied","Data":"05052189d3e724d700820cf98908bb07ddce6850c29bc56b75b5012f6ea8db45"}
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.304630 4965 generic.go:334] "Generic (PLEG): container finished" podID="5e1e7158-ad23-4414-9858-0c1056a71f56" containerID="df9349b40801de7a33659f4da34ffba8824b21bb84fe071f573dede5fe655566" exitCode=0
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.304695 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5667c" event={"ID":"5e1e7158-ad23-4414-9858-0c1056a71f56","Type":"ContainerDied","Data":"df9349b40801de7a33659f4da34ffba8824b21bb84fe071f573dede5fe655566"}
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.307668 4965 generic.go:334] "Generic (PLEG): container finished" podID="20180c3a-aa7a-4263-9057-c85c636bfc48" containerID="b699e697e01e0301e194c248be1ff9abd9c81a9a0a4390a8276b9a222feb9990" exitCode=0
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.307719 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-std27" event={"ID":"20180c3a-aa7a-4263-9057-c85c636bfc48","Type":"ContainerDied","Data":"b699e697e01e0301e194c248be1ff9abd9c81a9a0a4390a8276b9a222feb9990"}
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.307998 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z" podUID="4ee97219-32da-4510-84b0-00c9dca87629" containerName="route-controller-manager" containerID="cri-o://637e74bcd11dce389240e94cb96644ce24f4b8fd537f8b289aa5037ba24175d6" gracePeriod=30
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.315391 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z"
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.704081 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z"
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.741404 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl"]
Feb 19 09:45:43 crc kubenswrapper[4965]: E0219 09:45:43.741781 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee97219-32da-4510-84b0-00c9dca87629" containerName="route-controller-manager"
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.741802 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee97219-32da-4510-84b0-00c9dca87629" containerName="route-controller-manager"
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.741953 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ee97219-32da-4510-84b0-00c9dca87629" containerName="route-controller-manager"
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.742517 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl"
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.759204 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl"]
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.787343 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ee97219-32da-4510-84b0-00c9dca87629-client-ca\") pod \"4ee97219-32da-4510-84b0-00c9dca87629\" (UID: \"4ee97219-32da-4510-84b0-00c9dca87629\") "
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.787429 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ee97219-32da-4510-84b0-00c9dca87629-config\") pod \"4ee97219-32da-4510-84b0-00c9dca87629\" (UID: \"4ee97219-32da-4510-84b0-00c9dca87629\") "
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.787464 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fbg2\" (UniqueName: \"kubernetes.io/projected/4ee97219-32da-4510-84b0-00c9dca87629-kube-api-access-5fbg2\") pod \"4ee97219-32da-4510-84b0-00c9dca87629\" (UID: \"4ee97219-32da-4510-84b0-00c9dca87629\") "
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.787489 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ee97219-32da-4510-84b0-00c9dca87629-serving-cert\") pod \"4ee97219-32da-4510-84b0-00c9dca87629\" (UID: \"4ee97219-32da-4510-84b0-00c9dca87629\") "
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.787565 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-config\") pod \"route-controller-manager-b769dd84-5nhwl\" (UID: \"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32\") " pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl"
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.787592 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-serving-cert\") pod \"route-controller-manager-b769dd84-5nhwl\" (UID: \"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32\") " pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl"
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.787617 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcc89\" (UniqueName: \"kubernetes.io/projected/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-kube-api-access-dcc89\") pod \"route-controller-manager-b769dd84-5nhwl\" (UID: \"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32\") " pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl"
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.787644 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-client-ca\") pod \"route-controller-manager-b769dd84-5nhwl\" (UID: \"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32\") " pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl"
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.788719 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ee97219-32da-4510-84b0-00c9dca87629-client-ca" (OuterVolumeSpecName: "client-ca") pod "4ee97219-32da-4510-84b0-00c9dca87629" (UID: "4ee97219-32da-4510-84b0-00c9dca87629"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.788889 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ee97219-32da-4510-84b0-00c9dca87629-config" (OuterVolumeSpecName: "config") pod "4ee97219-32da-4510-84b0-00c9dca87629" (UID: "4ee97219-32da-4510-84b0-00c9dca87629"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.796389 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ee97219-32da-4510-84b0-00c9dca87629-kube-api-access-5fbg2" (OuterVolumeSpecName: "kube-api-access-5fbg2") pod "4ee97219-32da-4510-84b0-00c9dca87629" (UID: "4ee97219-32da-4510-84b0-00c9dca87629"). InnerVolumeSpecName "kube-api-access-5fbg2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.809549 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ee97219-32da-4510-84b0-00c9dca87629-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4ee97219-32da-4510-84b0-00c9dca87629" (UID: "4ee97219-32da-4510-84b0-00c9dca87629"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.894229 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-client-ca\") pod \"route-controller-manager-b769dd84-5nhwl\" (UID: \"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32\") " pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl"
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.894362 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-config\") pod \"route-controller-manager-b769dd84-5nhwl\" (UID: \"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32\") " pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl"
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.894409 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-serving-cert\") pod \"route-controller-manager-b769dd84-5nhwl\" (UID: \"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32\") " pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl"
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.894459 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcc89\" (UniqueName: \"kubernetes.io/projected/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-kube-api-access-dcc89\") pod \"route-controller-manager-b769dd84-5nhwl\" (UID: \"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32\") " pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl"
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.894530 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ee97219-32da-4510-84b0-00c9dca87629-config\") on node \"crc\" DevicePath \"\""
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.894545 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fbg2\" (UniqueName: \"kubernetes.io/projected/4ee97219-32da-4510-84b0-00c9dca87629-kube-api-access-5fbg2\") on node \"crc\" DevicePath \"\""
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.894562 4965 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ee97219-32da-4510-84b0-00c9dca87629-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.894573 4965 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ee97219-32da-4510-84b0-00c9dca87629-client-ca\") on node \"crc\" DevicePath \"\""
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.895562 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-client-ca\") pod \"route-controller-manager-b769dd84-5nhwl\" (UID: \"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32\") " pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl"
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.897018 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-config\") pod \"route-controller-manager-b769dd84-5nhwl\" (UID: \"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32\") " pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl"
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.899856 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-serving-cert\") pod \"route-controller-manager-b769dd84-5nhwl\" (UID: \"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32\") " pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl"
Feb 19 09:45:43 crc kubenswrapper[4965]: I0219 09:45:43.920411 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcc89\" (UniqueName: \"kubernetes.io/projected/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-kube-api-access-dcc89\") pod \"route-controller-manager-b769dd84-5nhwl\" (UID: \"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32\") " pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl"
Feb 19 09:45:44 crc kubenswrapper[4965]: I0219 09:45:44.071674 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl"
Feb 19 09:45:44 crc kubenswrapper[4965]: I0219 09:45:44.318462 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl"]
Feb 19 09:45:44 crc kubenswrapper[4965]: I0219 09:45:44.320653 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shcvk" event={"ID":"98f1b66c-456a-415c-b093-20ab1fa33b9b","Type":"ContainerStarted","Data":"cfb0a480b40f7e52f180babc8283702e8e9b2b99b1d7249a595587152fcf9150"}
Feb 19 09:45:44 crc kubenswrapper[4965]: I0219 09:45:44.327525 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5667c" event={"ID":"5e1e7158-ad23-4414-9858-0c1056a71f56","Type":"ContainerStarted","Data":"29c4d915bea09f5fd5887e3598a860f9bd8a51f7be6891477dd557c1777e35f9"}
Feb 19 09:45:44 crc kubenswrapper[4965]: W0219 09:45:44.328157 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9abcf516_ebb9_47b7_af5d_6ae08a3d1d32.slice/crio-a89b073255fcb7cd3deac7acde84d4440ac189d2266916ad9dac04d4cf18dbc2 WatchSource:0}: Error finding container a89b073255fcb7cd3deac7acde84d4440ac189d2266916ad9dac04d4cf18dbc2: Status 404 returned error can't find the container with id a89b073255fcb7cd3deac7acde84d4440ac189d2266916ad9dac04d4cf18dbc2
Feb 19 09:45:44 crc kubenswrapper[4965]: I0219 09:45:44.331035 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-std27" event={"ID":"20180c3a-aa7a-4263-9057-c85c636bfc48","Type":"ContainerStarted","Data":"383848b0c91cb78afc820522c9df2e7493948795ce54de3f5a730bee0a6a0b86"}
Feb 19 09:45:44 crc kubenswrapper[4965]: I0219 09:45:44.345244 4965 generic.go:334] "Generic (PLEG): container finished" podID="4ee97219-32da-4510-84b0-00c9dca87629" containerID="637e74bcd11dce389240e94cb96644ce24f4b8fd537f8b289aa5037ba24175d6" exitCode=0
Feb 19 09:45:44 crc kubenswrapper[4965]: I0219 09:45:44.345387 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z" event={"ID":"4ee97219-32da-4510-84b0-00c9dca87629","Type":"ContainerDied","Data":"637e74bcd11dce389240e94cb96644ce24f4b8fd537f8b289aa5037ba24175d6"}
Feb 19 09:45:44 crc kubenswrapper[4965]: I0219 09:45:44.345455 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z" event={"ID":"4ee97219-32da-4510-84b0-00c9dca87629","Type":"ContainerDied","Data":"316fd3815ea21d4122b62012a848e89ab9fd15789176b67cfc9c07e0c841a814"}
Feb 19 09:45:44 crc kubenswrapper[4965]: I0219 09:45:44.345485 4965 scope.go:117] "RemoveContainer" containerID="637e74bcd11dce389240e94cb96644ce24f4b8fd537f8b289aa5037ba24175d6"
Feb 19 09:45:44 crc kubenswrapper[4965]: I0219 09:45:44.345880 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z"
Feb 19 09:45:44 crc kubenswrapper[4965]: I0219 09:45:44.359697 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-shcvk" podStartSLOduration=3.614739057 podStartE2EDuration="42.359665233s" podCreationTimestamp="2026-02-19 09:45:02 +0000 UTC" firstStartedPulling="2026-02-19 09:45:05.189441411 +0000 UTC m=+160.810762721" lastFinishedPulling="2026-02-19 09:45:43.934367587 +0000 UTC m=+199.555688897" observedRunningTime="2026-02-19 09:45:44.351978895 +0000 UTC m=+199.973300235" watchObservedRunningTime="2026-02-19 09:45:44.359665233 +0000 UTC m=+199.980986543"
Feb 19 09:45:44 crc kubenswrapper[4965]: I0219 09:45:44.381118 4965 scope.go:117] "RemoveContainer" containerID="637e74bcd11dce389240e94cb96644ce24f4b8fd537f8b289aa5037ba24175d6"
Feb 19 09:45:44 crc kubenswrapper[4965]: I0219 09:45:44.386154 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5667c" podStartSLOduration=2.950503449 podStartE2EDuration="40.386131582s" podCreationTimestamp="2026-02-19 09:45:04 +0000 UTC" firstStartedPulling="2026-02-19 09:45:06.380324871 +0000 UTC m=+162.001646181" lastFinishedPulling="2026-02-19 09:45:43.815952984 +0000 UTC m=+199.437274314" observedRunningTime="2026-02-19 09:45:44.382875699 +0000 UTC m=+200.004197029" watchObservedRunningTime="2026-02-19 09:45:44.386131582 +0000 UTC m=+200.007452892"
Feb 19 09:45:44 crc kubenswrapper[4965]: E0219 09:45:44.410559 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"637e74bcd11dce389240e94cb96644ce24f4b8fd537f8b289aa5037ba24175d6\": container with ID starting with 637e74bcd11dce389240e94cb96644ce24f4b8fd537f8b289aa5037ba24175d6 not found: ID does not exist" containerID="637e74bcd11dce389240e94cb96644ce24f4b8fd537f8b289aa5037ba24175d6"
Feb 19 09:45:44 crc kubenswrapper[4965]: I0219 09:45:44.411071 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"637e74bcd11dce389240e94cb96644ce24f4b8fd537f8b289aa5037ba24175d6"} err="failed to get container status \"637e74bcd11dce389240e94cb96644ce24f4b8fd537f8b289aa5037ba24175d6\": rpc error: code = NotFound desc = could not find container \"637e74bcd11dce389240e94cb96644ce24f4b8fd537f8b289aa5037ba24175d6\": container with ID starting with 637e74bcd11dce389240e94cb96644ce24f4b8fd537f8b289aa5037ba24175d6 not found: ID does not exist"
Feb 19 09:45:44 crc kubenswrapper[4965]: I0219 09:45:44.414669 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-std27" podStartSLOduration=5.084753645 podStartE2EDuration="42.414644646s" podCreationTimestamp="2026-02-19 09:45:02 +0000 UTC" firstStartedPulling="2026-02-19 09:45:06.390541489 +0000 UTC m=+162.011862799" lastFinishedPulling="2026-02-19 09:45:43.72043249 +0000 UTC m=+199.341753800" observedRunningTime="2026-02-19 09:45:44.411006922 +0000 UTC m=+200.032328242" watchObservedRunningTime="2026-02-19 09:45:44.414644646 +0000 UTC m=+200.035965956"
Feb 19 09:45:44 crc kubenswrapper[4965]: I0219 09:45:44.445960 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z"]
Feb 19 09:45:44 crc kubenswrapper[4965]: I0219 09:45:44.451592 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5664c447dc-fq72z"]
Feb 19 09:45:45 crc kubenswrapper[4965]: I0219 09:45:45.190883 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5667c"
Feb 19 09:45:45 crc kubenswrapper[4965]: I0219 09:45:45.190931 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5667c"
Feb 19 09:45:45 crc kubenswrapper[4965]: I0219 09:45:45.208579 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ee97219-32da-4510-84b0-00c9dca87629" path="/var/lib/kubelet/pods/4ee97219-32da-4510-84b0-00c9dca87629/volumes"
Feb 19 09:45:45 crc kubenswrapper[4965]: I0219 09:45:45.351724 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl" event={"ID":"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32","Type":"ContainerStarted","Data":"8877f0f03c3781eafe62b6786d2c4afdfb26d80d5cd3e98ab77760013c6859fa"}
Feb 19 09:45:45 crc kubenswrapper[4965]: I0219 09:45:45.351793 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl" event={"ID":"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32","Type":"ContainerStarted","Data":"a89b073255fcb7cd3deac7acde84d4440ac189d2266916ad9dac04d4cf18dbc2"}
Feb 19 09:45:45 crc kubenswrapper[4965]: I0219 09:45:45.352030 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl"
Feb 19 09:45:45 crc kubenswrapper[4965]: I0219 09:45:45.359488 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl"
Feb 19 09:45:45 crc kubenswrapper[4965]: I0219 09:45:45.379177 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl" podStartSLOduration=3.379148035 podStartE2EDuration="3.379148035s" podCreationTimestamp="2026-02-19 09:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:45.378036546 +0000 UTC m=+200.999357856" watchObservedRunningTime="2026-02-19 09:45:45.379148035 +0000 UTC m=+201.000469345"
Feb 19 09:45:46 crc kubenswrapper[4965]: I0219 09:45:46.385036 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-5667c" podUID="5e1e7158-ad23-4414-9858-0c1056a71f56" containerName="registry-server" probeResult="failure" output=<
Feb 19 09:45:46 crc kubenswrapper[4965]: timeout: failed to connect service ":50051" within 1s
Feb 19 09:45:46 crc kubenswrapper[4965]: >
Feb 19 09:45:46 crc kubenswrapper[4965]: I0219 09:45:46.601513 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 09:45:46 crc kubenswrapper[4965]: I0219 09:45:46.601579 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 09:45:47 crc kubenswrapper[4965]: I0219 09:45:47.593206 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-88wlz"]
Feb 19 09:45:48 crc kubenswrapper[4965]: I0219 09:45:48.047497 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 19 09:45:48 crc kubenswrapper[4965]: I0219 09:45:48.049051 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 19 09:45:48 crc kubenswrapper[4965]: I0219 09:45:48.052812 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 19 09:45:48 crc kubenswrapper[4965]: I0219 09:45:48.053024 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 19 09:45:48 crc kubenswrapper[4965]: I0219 09:45:48.068800 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 19 09:45:48 crc kubenswrapper[4965]: I0219 09:45:48.242427 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f22d0799-00ab-404f-840d-ca40d1527566-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f22d0799-00ab-404f-840d-ca40d1527566\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 19 09:45:48 crc kubenswrapper[4965]: I0219 09:45:48.242649 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f22d0799-00ab-404f-840d-ca40d1527566-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f22d0799-00ab-404f-840d-ca40d1527566\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 19 09:45:48 crc kubenswrapper[4965]: I0219 09:45:48.343619 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f22d0799-00ab-404f-840d-ca40d1527566-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f22d0799-00ab-404f-840d-ca40d1527566\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 19 09:45:48 crc kubenswrapper[4965]: I0219 09:45:48.343718 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f22d0799-00ab-404f-840d-ca40d1527566-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f22d0799-00ab-404f-840d-ca40d1527566\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 19 09:45:48 crc kubenswrapper[4965]: I0219 09:45:48.343764 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f22d0799-00ab-404f-840d-ca40d1527566-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f22d0799-00ab-404f-840d-ca40d1527566\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 19 09:45:48 crc kubenswrapper[4965]: I0219 09:45:48.363826 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f22d0799-00ab-404f-840d-ca40d1527566-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f22d0799-00ab-404f-840d-ca40d1527566\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 19 09:45:48 crc kubenswrapper[4965]: I0219 09:45:48.374492 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 19 09:45:48 crc kubenswrapper[4965]: I0219 09:45:48.938863 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 19 09:45:48 crc kubenswrapper[4965]: W0219 09:45:48.949984 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf22d0799_00ab_404f_840d_ca40d1527566.slice/crio-6a6362cbe203a62b5211abbd8aa619716f5711e91228ee67467ad8cdb294e350 WatchSource:0}: Error finding container 6a6362cbe203a62b5211abbd8aa619716f5711e91228ee67467ad8cdb294e350: Status 404 returned error can't find the container with id 6a6362cbe203a62b5211abbd8aa619716f5711e91228ee67467ad8cdb294e350
Feb 19 09:45:49 crc kubenswrapper[4965]: I0219 09:45:49.566287 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f22d0799-00ab-404f-840d-ca40d1527566","Type":"ContainerStarted","Data":"6a6362cbe203a62b5211abbd8aa619716f5711e91228ee67467ad8cdb294e350"}
Feb 19 09:45:50 crc kubenswrapper[4965]: I0219 09:45:50.730731 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f22d0799-00ab-404f-840d-ca40d1527566","Type":"ContainerStarted","Data":"ae6e5a36e007cb89f523d85f96f1c639934bc4e2f35ab09992dde8212fff8caf"}
Feb 19 09:45:50 crc kubenswrapper[4965]: I0219 09:45:50.747609 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.747578936 podStartE2EDuration="2.747578936s" podCreationTimestamp="2026-02-19 09:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:50.746879277 +0000 UTC m=+206.368200597" watchObservedRunningTime="2026-02-19 09:45:50.747578936 +0000 UTC m=+206.368900246"
Feb 19 09:45:51 crc
kubenswrapper[4965]: I0219 09:45:51.738662 4965 generic.go:334] "Generic (PLEG): container finished" podID="f22d0799-00ab-404f-840d-ca40d1527566" containerID="ae6e5a36e007cb89f523d85f96f1c639934bc4e2f35ab09992dde8212fff8caf" exitCode=0 Feb 19 09:45:51 crc kubenswrapper[4965]: I0219 09:45:51.738798 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f22d0799-00ab-404f-840d-ca40d1527566","Type":"ContainerDied","Data":"ae6e5a36e007cb89f523d85f96f1c639934bc4e2f35ab09992dde8212fff8caf"} Feb 19 09:45:53 crc kubenswrapper[4965]: I0219 09:45:53.080310 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 09:45:53 crc kubenswrapper[4965]: I0219 09:45:53.211882 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-shcvk" Feb 19 09:45:53 crc kubenswrapper[4965]: I0219 09:45:53.211967 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-shcvk" Feb 19 09:45:53 crc kubenswrapper[4965]: I0219 09:45:53.253286 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f22d0799-00ab-404f-840d-ca40d1527566-kube-api-access\") pod \"f22d0799-00ab-404f-840d-ca40d1527566\" (UID: \"f22d0799-00ab-404f-840d-ca40d1527566\") " Feb 19 09:45:53 crc kubenswrapper[4965]: I0219 09:45:53.253816 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f22d0799-00ab-404f-840d-ca40d1527566-kubelet-dir\") pod \"f22d0799-00ab-404f-840d-ca40d1527566\" (UID: \"f22d0799-00ab-404f-840d-ca40d1527566\") " Feb 19 09:45:53 crc kubenswrapper[4965]: I0219 09:45:53.253984 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f22d0799-00ab-404f-840d-ca40d1527566-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f22d0799-00ab-404f-840d-ca40d1527566" (UID: "f22d0799-00ab-404f-840d-ca40d1527566"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:45:53 crc kubenswrapper[4965]: I0219 09:45:53.262941 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f22d0799-00ab-404f-840d-ca40d1527566-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f22d0799-00ab-404f-840d-ca40d1527566" (UID: "f22d0799-00ab-404f-840d-ca40d1527566"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:53 crc kubenswrapper[4965]: I0219 09:45:53.281861 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-shcvk" Feb 19 09:45:53 crc kubenswrapper[4965]: I0219 09:45:53.355651 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f22d0799-00ab-404f-840d-ca40d1527566-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:53 crc kubenswrapper[4965]: I0219 09:45:53.355696 4965 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f22d0799-00ab-404f-840d-ca40d1527566-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:53 crc kubenswrapper[4965]: I0219 09:45:53.647511 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-std27" Feb 19 09:45:53 crc kubenswrapper[4965]: I0219 09:45:53.647576 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-std27" Feb 19 09:45:53 crc kubenswrapper[4965]: I0219 09:45:53.695473 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-std27" Feb 19 09:45:53 crc kubenswrapper[4965]: I0219 09:45:53.752336 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 09:45:53 crc kubenswrapper[4965]: I0219 09:45:53.752329 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f22d0799-00ab-404f-840d-ca40d1527566","Type":"ContainerDied","Data":"6a6362cbe203a62b5211abbd8aa619716f5711e91228ee67467ad8cdb294e350"} Feb 19 09:45:53 crc kubenswrapper[4965]: I0219 09:45:53.752409 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a6362cbe203a62b5211abbd8aa619716f5711e91228ee67467ad8cdb294e350" Feb 19 09:45:53 crc kubenswrapper[4965]: I0219 09:45:53.790669 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-std27" Feb 19 09:45:53 crc kubenswrapper[4965]: I0219 09:45:53.794326 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-shcvk" Feb 19 09:45:54 crc kubenswrapper[4965]: I0219 09:45:54.449177 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 09:45:54 crc kubenswrapper[4965]: E0219 09:45:54.449514 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f22d0799-00ab-404f-840d-ca40d1527566" containerName="pruner" Feb 19 09:45:54 crc kubenswrapper[4965]: I0219 09:45:54.449529 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="f22d0799-00ab-404f-840d-ca40d1527566" containerName="pruner" Feb 19 09:45:54 crc kubenswrapper[4965]: I0219 09:45:54.449643 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="f22d0799-00ab-404f-840d-ca40d1527566" containerName="pruner" Feb 19 09:45:54 crc kubenswrapper[4965]: I0219 09:45:54.450139 4965 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:45:54 crc kubenswrapper[4965]: I0219 09:45:54.452363 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 09:45:54 crc kubenswrapper[4965]: I0219 09:45:54.452899 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 09:45:54 crc kubenswrapper[4965]: I0219 09:45:54.495415 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 09:45:54 crc kubenswrapper[4965]: I0219 09:45:54.572734 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cce58c03-973d-4b4b-8854-cf6d27c71d28-kube-api-access\") pod \"installer-9-crc\" (UID: \"cce58c03-973d-4b4b-8854-cf6d27c71d28\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:45:54 crc kubenswrapper[4965]: I0219 09:45:54.572817 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cce58c03-973d-4b4b-8854-cf6d27c71d28-var-lock\") pod \"installer-9-crc\" (UID: \"cce58c03-973d-4b4b-8854-cf6d27c71d28\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:45:54 crc kubenswrapper[4965]: I0219 09:45:54.572838 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cce58c03-973d-4b4b-8854-cf6d27c71d28-kubelet-dir\") pod \"installer-9-crc\" (UID: \"cce58c03-973d-4b4b-8854-cf6d27c71d28\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:45:54 crc kubenswrapper[4965]: I0219 09:45:54.674749 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/cce58c03-973d-4b4b-8854-cf6d27c71d28-kube-api-access\") pod \"installer-9-crc\" (UID: \"cce58c03-973d-4b4b-8854-cf6d27c71d28\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:45:54 crc kubenswrapper[4965]: I0219 09:45:54.674845 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cce58c03-973d-4b4b-8854-cf6d27c71d28-kubelet-dir\") pod \"installer-9-crc\" (UID: \"cce58c03-973d-4b4b-8854-cf6d27c71d28\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:45:54 crc kubenswrapper[4965]: I0219 09:45:54.674878 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cce58c03-973d-4b4b-8854-cf6d27c71d28-var-lock\") pod \"installer-9-crc\" (UID: \"cce58c03-973d-4b4b-8854-cf6d27c71d28\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:45:54 crc kubenswrapper[4965]: I0219 09:45:54.674993 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cce58c03-973d-4b4b-8854-cf6d27c71d28-var-lock\") pod \"installer-9-crc\" (UID: \"cce58c03-973d-4b4b-8854-cf6d27c71d28\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:45:54 crc kubenswrapper[4965]: I0219 09:45:54.675059 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cce58c03-973d-4b4b-8854-cf6d27c71d28-kubelet-dir\") pod \"installer-9-crc\" (UID: \"cce58c03-973d-4b4b-8854-cf6d27c71d28\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:45:54 crc kubenswrapper[4965]: I0219 09:45:54.697690 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cce58c03-973d-4b4b-8854-cf6d27c71d28-kube-api-access\") pod \"installer-9-crc\" (UID: \"cce58c03-973d-4b4b-8854-cf6d27c71d28\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:45:54 crc kubenswrapper[4965]: I0219 09:45:54.789252 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:45:55 crc kubenswrapper[4965]: I0219 09:45:55.115474 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-std27"] Feb 19 09:45:55 crc kubenswrapper[4965]: I0219 09:45:55.211497 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 09:45:55 crc kubenswrapper[4965]: I0219 09:45:55.254172 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5667c" Feb 19 09:45:55 crc kubenswrapper[4965]: I0219 09:45:55.303857 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5667c" Feb 19 09:45:55 crc kubenswrapper[4965]: I0219 09:45:55.768351 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-std27" podUID="20180c3a-aa7a-4263-9057-c85c636bfc48" containerName="registry-server" containerID="cri-o://383848b0c91cb78afc820522c9df2e7493948795ce54de3f5a730bee0a6a0b86" gracePeriod=2 Feb 19 09:45:55 crc kubenswrapper[4965]: I0219 09:45:55.769554 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"cce58c03-973d-4b4b-8854-cf6d27c71d28","Type":"ContainerStarted","Data":"e1f96256fc4566f30ddeeed4acea99d14d2fd9c3fcbad319edca036182c83870"} Feb 19 09:45:55 crc kubenswrapper[4965]: I0219 09:45:55.771138 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"cce58c03-973d-4b4b-8854-cf6d27c71d28","Type":"ContainerStarted","Data":"2c98190457d2258c2fbc77d9a59b7518701919ce4c9b1c1dbadb9325699a9687"} Feb 19 09:45:55 crc kubenswrapper[4965]: 
I0219 09:45:55.807290 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.807259725 podStartE2EDuration="1.807259725s" podCreationTimestamp="2026-02-19 09:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:45:55.802504072 +0000 UTC m=+211.423825392" watchObservedRunningTime="2026-02-19 09:45:55.807259725 +0000 UTC m=+211.428581055" Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.116849 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-shcvk"] Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.117109 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-shcvk" podUID="98f1b66c-456a-415c-b093-20ab1fa33b9b" containerName="registry-server" containerID="cri-o://cfb0a480b40f7e52f180babc8283702e8e9b2b99b1d7249a595587152fcf9150" gracePeriod=2 Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.466559 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-std27" Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.527243 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20180c3a-aa7a-4263-9057-c85c636bfc48-utilities\") pod \"20180c3a-aa7a-4263-9057-c85c636bfc48\" (UID: \"20180c3a-aa7a-4263-9057-c85c636bfc48\") " Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.527333 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42dwh\" (UniqueName: \"kubernetes.io/projected/20180c3a-aa7a-4263-9057-c85c636bfc48-kube-api-access-42dwh\") pod \"20180c3a-aa7a-4263-9057-c85c636bfc48\" (UID: \"20180c3a-aa7a-4263-9057-c85c636bfc48\") " Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.527375 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20180c3a-aa7a-4263-9057-c85c636bfc48-catalog-content\") pod \"20180c3a-aa7a-4263-9057-c85c636bfc48\" (UID: \"20180c3a-aa7a-4263-9057-c85c636bfc48\") " Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.528149 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20180c3a-aa7a-4263-9057-c85c636bfc48-utilities" (OuterVolumeSpecName: "utilities") pod "20180c3a-aa7a-4263-9057-c85c636bfc48" (UID: "20180c3a-aa7a-4263-9057-c85c636bfc48"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.533856 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20180c3a-aa7a-4263-9057-c85c636bfc48-kube-api-access-42dwh" (OuterVolumeSpecName: "kube-api-access-42dwh") pod "20180c3a-aa7a-4263-9057-c85c636bfc48" (UID: "20180c3a-aa7a-4263-9057-c85c636bfc48"). InnerVolumeSpecName "kube-api-access-42dwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.590343 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20180c3a-aa7a-4263-9057-c85c636bfc48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20180c3a-aa7a-4263-9057-c85c636bfc48" (UID: "20180c3a-aa7a-4263-9057-c85c636bfc48"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.629415 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20180c3a-aa7a-4263-9057-c85c636bfc48-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.629457 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42dwh\" (UniqueName: \"kubernetes.io/projected/20180c3a-aa7a-4263-9057-c85c636bfc48-kube-api-access-42dwh\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.629471 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20180c3a-aa7a-4263-9057-c85c636bfc48-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.784422 4965 generic.go:334] "Generic (PLEG): container finished" podID="98f1b66c-456a-415c-b093-20ab1fa33b9b" containerID="cfb0a480b40f7e52f180babc8283702e8e9b2b99b1d7249a595587152fcf9150" exitCode=0 Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.784546 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shcvk" event={"ID":"98f1b66c-456a-415c-b093-20ab1fa33b9b","Type":"ContainerDied","Data":"cfb0a480b40f7e52f180babc8283702e8e9b2b99b1d7249a595587152fcf9150"} Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.786203 4965 generic.go:334] "Generic (PLEG): container 
finished" podID="b1832525-d3f5-47bc-879b-4d4e4f3c14bd" containerID="444591c8dd8cbddf28839dc5a52d988ad48ea2f9cbec9153f472bf50ed5b05e4" exitCode=0 Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.786249 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c55hf" event={"ID":"b1832525-d3f5-47bc-879b-4d4e4f3c14bd","Type":"ContainerDied","Data":"444591c8dd8cbddf28839dc5a52d988ad48ea2f9cbec9153f472bf50ed5b05e4"} Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.800313 4965 generic.go:334] "Generic (PLEG): container finished" podID="20180c3a-aa7a-4263-9057-c85c636bfc48" containerID="383848b0c91cb78afc820522c9df2e7493948795ce54de3f5a730bee0a6a0b86" exitCode=0 Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.800495 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-std27" Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.800434 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-std27" event={"ID":"20180c3a-aa7a-4263-9057-c85c636bfc48","Type":"ContainerDied","Data":"383848b0c91cb78afc820522c9df2e7493948795ce54de3f5a730bee0a6a0b86"} Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.800750 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-std27" event={"ID":"20180c3a-aa7a-4263-9057-c85c636bfc48","Type":"ContainerDied","Data":"c887238d8ac72033f8640766d1d0c76c32255b85797c323f404e12da619a2224"} Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.800829 4965 scope.go:117] "RemoveContainer" containerID="383848b0c91cb78afc820522c9df2e7493948795ce54de3f5a730bee0a6a0b86" Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.836771 4965 scope.go:117] "RemoveContainer" containerID="b699e697e01e0301e194c248be1ff9abd9c81a9a0a4390a8276b9a222feb9990" Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.872353 4965 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-std27"] Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.878862 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-std27"] Feb 19 09:45:56 crc kubenswrapper[4965]: I0219 09:45:56.990136 4965 scope.go:117] "RemoveContainer" containerID="10de4366a806f9e7d262c223f984ee3cf7838ba2b1ce8d35e0aaffedecfcd699" Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.027963 4965 scope.go:117] "RemoveContainer" containerID="383848b0c91cb78afc820522c9df2e7493948795ce54de3f5a730bee0a6a0b86" Feb 19 09:45:57 crc kubenswrapper[4965]: E0219 09:45:57.028870 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"383848b0c91cb78afc820522c9df2e7493948795ce54de3f5a730bee0a6a0b86\": container with ID starting with 383848b0c91cb78afc820522c9df2e7493948795ce54de3f5a730bee0a6a0b86 not found: ID does not exist" containerID="383848b0c91cb78afc820522c9df2e7493948795ce54de3f5a730bee0a6a0b86" Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.028927 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"383848b0c91cb78afc820522c9df2e7493948795ce54de3f5a730bee0a6a0b86"} err="failed to get container status \"383848b0c91cb78afc820522c9df2e7493948795ce54de3f5a730bee0a6a0b86\": rpc error: code = NotFound desc = could not find container \"383848b0c91cb78afc820522c9df2e7493948795ce54de3f5a730bee0a6a0b86\": container with ID starting with 383848b0c91cb78afc820522c9df2e7493948795ce54de3f5a730bee0a6a0b86 not found: ID does not exist" Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.028960 4965 scope.go:117] "RemoveContainer" containerID="b699e697e01e0301e194c248be1ff9abd9c81a9a0a4390a8276b9a222feb9990" Feb 19 09:45:57 crc kubenswrapper[4965]: E0219 09:45:57.030618 4965 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"b699e697e01e0301e194c248be1ff9abd9c81a9a0a4390a8276b9a222feb9990\": container with ID starting with b699e697e01e0301e194c248be1ff9abd9c81a9a0a4390a8276b9a222feb9990 not found: ID does not exist" containerID="b699e697e01e0301e194c248be1ff9abd9c81a9a0a4390a8276b9a222feb9990" Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.030645 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b699e697e01e0301e194c248be1ff9abd9c81a9a0a4390a8276b9a222feb9990"} err="failed to get container status \"b699e697e01e0301e194c248be1ff9abd9c81a9a0a4390a8276b9a222feb9990\": rpc error: code = NotFound desc = could not find container \"b699e697e01e0301e194c248be1ff9abd9c81a9a0a4390a8276b9a222feb9990\": container with ID starting with b699e697e01e0301e194c248be1ff9abd9c81a9a0a4390a8276b9a222feb9990 not found: ID does not exist" Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.030677 4965 scope.go:117] "RemoveContainer" containerID="10de4366a806f9e7d262c223f984ee3cf7838ba2b1ce8d35e0aaffedecfcd699" Feb 19 09:45:57 crc kubenswrapper[4965]: E0219 09:45:57.031456 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10de4366a806f9e7d262c223f984ee3cf7838ba2b1ce8d35e0aaffedecfcd699\": container with ID starting with 10de4366a806f9e7d262c223f984ee3cf7838ba2b1ce8d35e0aaffedecfcd699 not found: ID does not exist" containerID="10de4366a806f9e7d262c223f984ee3cf7838ba2b1ce8d35e0aaffedecfcd699" Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.031477 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10de4366a806f9e7d262c223f984ee3cf7838ba2b1ce8d35e0aaffedecfcd699"} err="failed to get container status \"10de4366a806f9e7d262c223f984ee3cf7838ba2b1ce8d35e0aaffedecfcd699\": rpc error: code = NotFound desc = could not find container 
\"10de4366a806f9e7d262c223f984ee3cf7838ba2b1ce8d35e0aaffedecfcd699\": container with ID starting with 10de4366a806f9e7d262c223f984ee3cf7838ba2b1ce8d35e0aaffedecfcd699 not found: ID does not exist" Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.163362 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-shcvk" Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.209002 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20180c3a-aa7a-4263-9057-c85c636bfc48" path="/var/lib/kubelet/pods/20180c3a-aa7a-4263-9057-c85c636bfc48/volumes" Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.359684 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nch5g\" (UniqueName: \"kubernetes.io/projected/98f1b66c-456a-415c-b093-20ab1fa33b9b-kube-api-access-nch5g\") pod \"98f1b66c-456a-415c-b093-20ab1fa33b9b\" (UID: \"98f1b66c-456a-415c-b093-20ab1fa33b9b\") " Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.360006 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f1b66c-456a-415c-b093-20ab1fa33b9b-catalog-content\") pod \"98f1b66c-456a-415c-b093-20ab1fa33b9b\" (UID: \"98f1b66c-456a-415c-b093-20ab1fa33b9b\") " Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.360152 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f1b66c-456a-415c-b093-20ab1fa33b9b-utilities\") pod \"98f1b66c-456a-415c-b093-20ab1fa33b9b\" (UID: \"98f1b66c-456a-415c-b093-20ab1fa33b9b\") " Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.360810 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98f1b66c-456a-415c-b093-20ab1fa33b9b-utilities" (OuterVolumeSpecName: "utilities") pod 
"98f1b66c-456a-415c-b093-20ab1fa33b9b" (UID: "98f1b66c-456a-415c-b093-20ab1fa33b9b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.365529 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98f1b66c-456a-415c-b093-20ab1fa33b9b-kube-api-access-nch5g" (OuterVolumeSpecName: "kube-api-access-nch5g") pod "98f1b66c-456a-415c-b093-20ab1fa33b9b" (UID: "98f1b66c-456a-415c-b093-20ab1fa33b9b"). InnerVolumeSpecName "kube-api-access-nch5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.405864 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98f1b66c-456a-415c-b093-20ab1fa33b9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98f1b66c-456a-415c-b093-20ab1fa33b9b" (UID: "98f1b66c-456a-415c-b093-20ab1fa33b9b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.461580 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nch5g\" (UniqueName: \"kubernetes.io/projected/98f1b66c-456a-415c-b093-20ab1fa33b9b-kube-api-access-nch5g\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.461646 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f1b66c-456a-415c-b093-20ab1fa33b9b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.461666 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f1b66c-456a-415c-b093-20ab1fa33b9b-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.813483 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shcvk" event={"ID":"98f1b66c-456a-415c-b093-20ab1fa33b9b","Type":"ContainerDied","Data":"0f41403e0e617b30b30c5f8e46855ac16bb25da788b2dbb246fc4687c7586506"} Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.813545 4965 scope.go:117] "RemoveContainer" containerID="cfb0a480b40f7e52f180babc8283702e8e9b2b99b1d7249a595587152fcf9150" Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.813859 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-shcvk" Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.825017 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c55hf" event={"ID":"b1832525-d3f5-47bc-879b-4d4e4f3c14bd","Type":"ContainerStarted","Data":"b6ced0454e0dc5fb5ccbcc5e5378ccb5ff7f72fcdcfb20212bcbdb2d3d7e2809"} Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.829212 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8tjpx" event={"ID":"7296417d-9dfd-4ca9-8ad7-c0016daa9b53","Type":"ContainerStarted","Data":"98ddda2a71f22cc5fb88701bd7f1acb311632e3069697f7ffa1af4f3d88f5f2a"} Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.839039 4965 scope.go:117] "RemoveContainer" containerID="05052189d3e724d700820cf98908bb07ddce6850c29bc56b75b5012f6ea8db45" Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.849600 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c55hf" podStartSLOduration=2.669772246 podStartE2EDuration="53.849581274s" podCreationTimestamp="2026-02-19 09:45:04 +0000 UTC" firstStartedPulling="2026-02-19 09:45:06.38073082 +0000 UTC m=+162.002052130" lastFinishedPulling="2026-02-19 09:45:57.560539848 +0000 UTC m=+213.181861158" observedRunningTime="2026-02-19 09:45:57.848070905 +0000 UTC m=+213.469392215" watchObservedRunningTime="2026-02-19 09:45:57.849581274 +0000 UTC m=+213.470902584" Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.888655 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-shcvk"] Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.889159 4965 scope.go:117] "RemoveContainer" containerID="514f8c61e959ed459a09302a32819b44280cdb216c8943f448872c8e27fd5501" Feb 19 09:45:57 crc kubenswrapper[4965]: I0219 09:45:57.894500 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-shcvk"] Feb 19 09:45:57 crc kubenswrapper[4965]: E0219 09:45:57.961044 4965 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98f1b66c_456a_415c_b093_20ab1fa33b9b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98f1b66c_456a_415c_b093_20ab1fa33b9b.slice/crio-0f41403e0e617b30b30c5f8e46855ac16bb25da788b2dbb246fc4687c7586506\": RecentStats: unable to find data in memory cache]" Feb 19 09:45:58 crc kubenswrapper[4965]: I0219 09:45:58.515793 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5667c"] Feb 19 09:45:58 crc kubenswrapper[4965]: I0219 09:45:58.516095 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5667c" podUID="5e1e7158-ad23-4414-9858-0c1056a71f56" containerName="registry-server" containerID="cri-o://29c4d915bea09f5fd5887e3598a860f9bd8a51f7be6891477dd557c1777e35f9" gracePeriod=2 Feb 19 09:45:58 crc kubenswrapper[4965]: I0219 09:45:58.842961 4965 generic.go:334] "Generic (PLEG): container finished" podID="5e1e7158-ad23-4414-9858-0c1056a71f56" containerID="29c4d915bea09f5fd5887e3598a860f9bd8a51f7be6891477dd557c1777e35f9" exitCode=0 Feb 19 09:45:58 crc kubenswrapper[4965]: I0219 09:45:58.843069 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5667c" event={"ID":"5e1e7158-ad23-4414-9858-0c1056a71f56","Type":"ContainerDied","Data":"29c4d915bea09f5fd5887e3598a860f9bd8a51f7be6891477dd557c1777e35f9"} Feb 19 09:45:58 crc kubenswrapper[4965]: I0219 09:45:58.851415 4965 generic.go:334] "Generic (PLEG): container finished" podID="c2ea1b40-1bc8-462a-a2a2-218c24c27584" containerID="b2c72d57f947aa51bc030460dee6b59f0d877cf0cf098a9ba26312ee10fe00f5" 
exitCode=0 Feb 19 09:45:58 crc kubenswrapper[4965]: I0219 09:45:58.851588 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxnw5" event={"ID":"c2ea1b40-1bc8-462a-a2a2-218c24c27584","Type":"ContainerDied","Data":"b2c72d57f947aa51bc030460dee6b59f0d877cf0cf098a9ba26312ee10fe00f5"} Feb 19 09:45:58 crc kubenswrapper[4965]: I0219 09:45:58.858399 4965 generic.go:334] "Generic (PLEG): container finished" podID="7296417d-9dfd-4ca9-8ad7-c0016daa9b53" containerID="98ddda2a71f22cc5fb88701bd7f1acb311632e3069697f7ffa1af4f3d88f5f2a" exitCode=0 Feb 19 09:45:58 crc kubenswrapper[4965]: I0219 09:45:58.858473 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8tjpx" event={"ID":"7296417d-9dfd-4ca9-8ad7-c0016daa9b53","Type":"ContainerDied","Data":"98ddda2a71f22cc5fb88701bd7f1acb311632e3069697f7ffa1af4f3d88f5f2a"} Feb 19 09:45:58 crc kubenswrapper[4965]: I0219 09:45:58.863834 4965 generic.go:334] "Generic (PLEG): container finished" podID="badd7c24-44c3-4853-9611-aeb49c3df0ab" containerID="b80a4fe5ec583fd45130619237e34ff8c7890bbf1ace972a3ef2884ab2dc17f8" exitCode=0 Feb 19 09:45:58 crc kubenswrapper[4965]: I0219 09:45:58.863908 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlmst" event={"ID":"badd7c24-44c3-4853-9611-aeb49c3df0ab","Type":"ContainerDied","Data":"b80a4fe5ec583fd45130619237e34ff8c7890bbf1ace972a3ef2884ab2dc17f8"} Feb 19 09:45:58 crc kubenswrapper[4965]: I0219 09:45:58.875636 4965 generic.go:334] "Generic (PLEG): container finished" podID="e428e472-401e-45b3-b70b-d2e0f19b52f9" containerID="1f344d4aee174b55e3016e7c4c616414d8c4772f71de32818708fed004947cb0" exitCode=0 Feb 19 09:45:58 crc kubenswrapper[4965]: I0219 09:45:58.875755 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8pfp" 
event={"ID":"e428e472-401e-45b3-b70b-d2e0f19b52f9","Type":"ContainerDied","Data":"1f344d4aee174b55e3016e7c4c616414d8c4772f71de32818708fed004947cb0"} Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.047055 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5667c" Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.191628 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gwp5\" (UniqueName: \"kubernetes.io/projected/5e1e7158-ad23-4414-9858-0c1056a71f56-kube-api-access-6gwp5\") pod \"5e1e7158-ad23-4414-9858-0c1056a71f56\" (UID: \"5e1e7158-ad23-4414-9858-0c1056a71f56\") " Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.191956 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1e7158-ad23-4414-9858-0c1056a71f56-utilities\") pod \"5e1e7158-ad23-4414-9858-0c1056a71f56\" (UID: \"5e1e7158-ad23-4414-9858-0c1056a71f56\") " Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.192089 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1e7158-ad23-4414-9858-0c1056a71f56-catalog-content\") pod \"5e1e7158-ad23-4414-9858-0c1056a71f56\" (UID: \"5e1e7158-ad23-4414-9858-0c1056a71f56\") " Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.192946 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e1e7158-ad23-4414-9858-0c1056a71f56-utilities" (OuterVolumeSpecName: "utilities") pod "5e1e7158-ad23-4414-9858-0c1056a71f56" (UID: "5e1e7158-ad23-4414-9858-0c1056a71f56"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.202408 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e1e7158-ad23-4414-9858-0c1056a71f56-kube-api-access-6gwp5" (OuterVolumeSpecName: "kube-api-access-6gwp5") pod "5e1e7158-ad23-4414-9858-0c1056a71f56" (UID: "5e1e7158-ad23-4414-9858-0c1056a71f56"). InnerVolumeSpecName "kube-api-access-6gwp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.207894 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98f1b66c-456a-415c-b093-20ab1fa33b9b" path="/var/lib/kubelet/pods/98f1b66c-456a-415c-b093-20ab1fa33b9b/volumes" Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.231030 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e1e7158-ad23-4414-9858-0c1056a71f56-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e1e7158-ad23-4414-9858-0c1056a71f56" (UID: "5e1e7158-ad23-4414-9858-0c1056a71f56"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.293874 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1e7158-ad23-4414-9858-0c1056a71f56-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.293917 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gwp5\" (UniqueName: \"kubernetes.io/projected/5e1e7158-ad23-4414-9858-0c1056a71f56-kube-api-access-6gwp5\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.293931 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1e7158-ad23-4414-9858-0c1056a71f56-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.892749 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxnw5" event={"ID":"c2ea1b40-1bc8-462a-a2a2-218c24c27584","Type":"ContainerStarted","Data":"971cb5acd33203801bcec353714e16d855edfdbd368fc8ce536a258a4a1af14b"} Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.895417 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8tjpx" event={"ID":"7296417d-9dfd-4ca9-8ad7-c0016daa9b53","Type":"ContainerStarted","Data":"6a18b36b7cc4dcad850bd0274daaa5f4e0e1472f4c86ae9d9f6c8f5536ddd84f"} Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.899320 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlmst" event={"ID":"badd7c24-44c3-4853-9611-aeb49c3df0ab","Type":"ContainerStarted","Data":"c8cdf3f9015872c95f56f22ba2c0ad8547f033744ccae299a507f76f28a9e61c"} Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.902313 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-s8pfp" event={"ID":"e428e472-401e-45b3-b70b-d2e0f19b52f9","Type":"ContainerStarted","Data":"672d235e141d03c0b7695ab7aaeabf0cf31a3ba4c6a63fbafad80b099dd50a13"} Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.904690 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5667c" event={"ID":"5e1e7158-ad23-4414-9858-0c1056a71f56","Type":"ContainerDied","Data":"48a986bd7335b23f8dbc5da07fb6dc84d757b200cd61a2761c7f518bf90360c0"} Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.904732 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5667c" Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.904751 4965 scope.go:117] "RemoveContainer" containerID="29c4d915bea09f5fd5887e3598a860f9bd8a51f7be6891477dd557c1777e35f9" Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.919599 4965 scope.go:117] "RemoveContainer" containerID="df9349b40801de7a33659f4da34ffba8824b21bb84fe071f573dede5fe655566" Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.922775 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fxnw5" podStartSLOduration=3.758318089 podStartE2EDuration="57.922732896s" podCreationTimestamp="2026-02-19 09:45:02 +0000 UTC" firstStartedPulling="2026-02-19 09:45:05.141557963 +0000 UTC m=+160.762879273" lastFinishedPulling="2026-02-19 09:45:59.30597275 +0000 UTC m=+214.927294080" observedRunningTime="2026-02-19 09:45:59.921523034 +0000 UTC m=+215.542844344" watchObservedRunningTime="2026-02-19 09:45:59.922732896 +0000 UTC m=+215.544054206" Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.939255 4965 scope.go:117] "RemoveContainer" containerID="0fba96ecb8083271d135fcba1da4ad09e63f802b7ed5069346524c2216ebba66" Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.953305 4965 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/community-operators-tlmst" podStartSLOduration=3.800391486 podStartE2EDuration="57.953282961s" podCreationTimestamp="2026-02-19 09:45:02 +0000 UTC" firstStartedPulling="2026-02-19 09:45:05.151996268 +0000 UTC m=+160.773317578" lastFinishedPulling="2026-02-19 09:45:59.304887723 +0000 UTC m=+214.926209053" observedRunningTime="2026-02-19 09:45:59.952952741 +0000 UTC m=+215.574274061" watchObservedRunningTime="2026-02-19 09:45:59.953282961 +0000 UTC m=+215.574604261" Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.978523 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s8pfp" podStartSLOduration=3.135489008 podStartE2EDuration="54.978497648s" podCreationTimestamp="2026-02-19 09:45:05 +0000 UTC" firstStartedPulling="2026-02-19 09:45:07.609363241 +0000 UTC m=+163.230684551" lastFinishedPulling="2026-02-19 09:45:59.452371881 +0000 UTC m=+215.073693191" observedRunningTime="2026-02-19 09:45:59.976758353 +0000 UTC m=+215.598079673" watchObservedRunningTime="2026-02-19 09:45:59.978497648 +0000 UTC m=+215.599818958" Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.995581 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5667c"] Feb 19 09:45:59 crc kubenswrapper[4965]: I0219 09:45:59.999032 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5667c"] Feb 19 09:46:00 crc kubenswrapper[4965]: I0219 09:46:00.027906 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8tjpx" podStartSLOduration=3.313136045 podStartE2EDuration="55.027881867s" podCreationTimestamp="2026-02-19 09:45:05 +0000 UTC" firstStartedPulling="2026-02-19 09:45:07.554379641 +0000 UTC m=+163.175700951" lastFinishedPulling="2026-02-19 09:45:59.269125453 +0000 UTC m=+214.890446773" observedRunningTime="2026-02-19 
09:46:00.024167211 +0000 UTC m=+215.645488521" watchObservedRunningTime="2026-02-19 09:46:00.027881867 +0000 UTC m=+215.649203177" Feb 19 09:46:01 crc kubenswrapper[4965]: I0219 09:46:01.209293 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e1e7158-ad23-4414-9858-0c1056a71f56" path="/var/lib/kubelet/pods/5e1e7158-ad23-4414-9858-0c1056a71f56/volumes" Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.168383 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl"] Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.169114 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl" podUID="9abcf516-ebb9-47b7-af5d-6ae08a3d1d32" containerName="route-controller-manager" containerID="cri-o://8877f0f03c3781eafe62b6786d2c4afdfb26d80d5cd3e98ab77760013c6859fa" gracePeriod=30 Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.692723 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl" Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.843907 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-serving-cert\") pod \"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32\" (UID: \"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32\") " Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.843978 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-client-ca\") pod \"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32\" (UID: \"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32\") " Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.844018 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-config\") pod \"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32\" (UID: \"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32\") " Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.844164 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcc89\" (UniqueName: \"kubernetes.io/projected/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-kube-api-access-dcc89\") pod \"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32\" (UID: \"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32\") " Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.845319 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-client-ca" (OuterVolumeSpecName: "client-ca") pod "9abcf516-ebb9-47b7-af5d-6ae08a3d1d32" (UID: "9abcf516-ebb9-47b7-af5d-6ae08a3d1d32"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.845335 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-config" (OuterVolumeSpecName: "config") pod "9abcf516-ebb9-47b7-af5d-6ae08a3d1d32" (UID: "9abcf516-ebb9-47b7-af5d-6ae08a3d1d32"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.855501 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9abcf516-ebb9-47b7-af5d-6ae08a3d1d32" (UID: "9abcf516-ebb9-47b7-af5d-6ae08a3d1d32"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.855685 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-kube-api-access-dcc89" (OuterVolumeSpecName: "kube-api-access-dcc89") pod "9abcf516-ebb9-47b7-af5d-6ae08a3d1d32" (UID: "9abcf516-ebb9-47b7-af5d-6ae08a3d1d32"). InnerVolumeSpecName "kube-api-access-dcc89". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.876232 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fxnw5" Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.876318 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fxnw5" Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.927540 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fxnw5" Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.929761 4965 generic.go:334] "Generic (PLEG): container finished" podID="9abcf516-ebb9-47b7-af5d-6ae08a3d1d32" containerID="8877f0f03c3781eafe62b6786d2c4afdfb26d80d5cd3e98ab77760013c6859fa" exitCode=0 Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.929875 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl" Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.929952 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl" event={"ID":"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32","Type":"ContainerDied","Data":"8877f0f03c3781eafe62b6786d2c4afdfb26d80d5cd3e98ab77760013c6859fa"} Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.930011 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl" event={"ID":"9abcf516-ebb9-47b7-af5d-6ae08a3d1d32","Type":"ContainerDied","Data":"a89b073255fcb7cd3deac7acde84d4440ac189d2266916ad9dac04d4cf18dbc2"} Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.930048 4965 scope.go:117] "RemoveContainer" containerID="8877f0f03c3781eafe62b6786d2c4afdfb26d80d5cd3e98ab77760013c6859fa" Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.945812 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcc89\" (UniqueName: \"kubernetes.io/projected/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-kube-api-access-dcc89\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.945936 4965 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.945948 4965 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.945957 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.954624 4965 scope.go:117] "RemoveContainer" containerID="8877f0f03c3781eafe62b6786d2c4afdfb26d80d5cd3e98ab77760013c6859fa" Feb 19 09:46:02 crc kubenswrapper[4965]: E0219 09:46:02.955466 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8877f0f03c3781eafe62b6786d2c4afdfb26d80d5cd3e98ab77760013c6859fa\": container with ID starting with 8877f0f03c3781eafe62b6786d2c4afdfb26d80d5cd3e98ab77760013c6859fa not found: ID does not exist" containerID="8877f0f03c3781eafe62b6786d2c4afdfb26d80d5cd3e98ab77760013c6859fa" Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.955502 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8877f0f03c3781eafe62b6786d2c4afdfb26d80d5cd3e98ab77760013c6859fa"} err="failed to get container status \"8877f0f03c3781eafe62b6786d2c4afdfb26d80d5cd3e98ab77760013c6859fa\": rpc error: code = NotFound desc = could not find container \"8877f0f03c3781eafe62b6786d2c4afdfb26d80d5cd3e98ab77760013c6859fa\": container with ID starting with 8877f0f03c3781eafe62b6786d2c4afdfb26d80d5cd3e98ab77760013c6859fa not found: ID does not exist" Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.973037 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl"] Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.975395 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b769dd84-5nhwl"] Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.996061 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tlmst" Feb 19 09:46:02 crc kubenswrapper[4965]: I0219 09:46:02.996245 
4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tlmst" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.035969 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tlmst" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.204956 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9abcf516-ebb9-47b7-af5d-6ae08a3d1d32" path="/var/lib/kubelet/pods/9abcf516-ebb9-47b7-af5d-6ae08a3d1d32/volumes" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.546950 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m"] Feb 19 09:46:03 crc kubenswrapper[4965]: E0219 09:46:03.547245 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e1e7158-ad23-4414-9858-0c1056a71f56" containerName="registry-server" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.547260 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e1e7158-ad23-4414-9858-0c1056a71f56" containerName="registry-server" Feb 19 09:46:03 crc kubenswrapper[4965]: E0219 09:46:03.547271 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20180c3a-aa7a-4263-9057-c85c636bfc48" containerName="extract-utilities" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.547278 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="20180c3a-aa7a-4263-9057-c85c636bfc48" containerName="extract-utilities" Feb 19 09:46:03 crc kubenswrapper[4965]: E0219 09:46:03.547289 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f1b66c-456a-415c-b093-20ab1fa33b9b" containerName="extract-utilities" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.547297 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f1b66c-456a-415c-b093-20ab1fa33b9b" containerName="extract-utilities" Feb 19 09:46:03 crc 
kubenswrapper[4965]: E0219 09:46:03.547308 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e1e7158-ad23-4414-9858-0c1056a71f56" containerName="extract-utilities" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.547314 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e1e7158-ad23-4414-9858-0c1056a71f56" containerName="extract-utilities" Feb 19 09:46:03 crc kubenswrapper[4965]: E0219 09:46:03.547321 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9abcf516-ebb9-47b7-af5d-6ae08a3d1d32" containerName="route-controller-manager" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.547328 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="9abcf516-ebb9-47b7-af5d-6ae08a3d1d32" containerName="route-controller-manager" Feb 19 09:46:03 crc kubenswrapper[4965]: E0219 09:46:03.547338 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20180c3a-aa7a-4263-9057-c85c636bfc48" containerName="extract-content" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.547347 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="20180c3a-aa7a-4263-9057-c85c636bfc48" containerName="extract-content" Feb 19 09:46:03 crc kubenswrapper[4965]: E0219 09:46:03.547359 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e1e7158-ad23-4414-9858-0c1056a71f56" containerName="extract-content" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.547366 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e1e7158-ad23-4414-9858-0c1056a71f56" containerName="extract-content" Feb 19 09:46:03 crc kubenswrapper[4965]: E0219 09:46:03.547373 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f1b66c-456a-415c-b093-20ab1fa33b9b" containerName="extract-content" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.547382 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f1b66c-456a-415c-b093-20ab1fa33b9b" containerName="extract-content" Feb 19 
09:46:03 crc kubenswrapper[4965]: E0219 09:46:03.547390 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f1b66c-456a-415c-b093-20ab1fa33b9b" containerName="registry-server" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.547396 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f1b66c-456a-415c-b093-20ab1fa33b9b" containerName="registry-server" Feb 19 09:46:03 crc kubenswrapper[4965]: E0219 09:46:03.547405 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20180c3a-aa7a-4263-9057-c85c636bfc48" containerName="registry-server" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.547411 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="20180c3a-aa7a-4263-9057-c85c636bfc48" containerName="registry-server" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.547540 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f1b66c-456a-415c-b093-20ab1fa33b9b" containerName="registry-server" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.547553 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e1e7158-ad23-4414-9858-0c1056a71f56" containerName="registry-server" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.547580 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="20180c3a-aa7a-4263-9057-c85c636bfc48" containerName="registry-server" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.547590 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="9abcf516-ebb9-47b7-af5d-6ae08a3d1d32" containerName="route-controller-manager" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.547986 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.552346 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.552469 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.552505 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.552591 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.552636 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.552475 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.553479 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/061341ab-5fe8-480f-94d1-594f1fc9b26f-serving-cert\") pod \"route-controller-manager-78697df6c5-q924m\" (UID: \"061341ab-5fe8-480f-94d1-594f1fc9b26f\") " pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.553806 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8thz\" (UniqueName: \"kubernetes.io/projected/061341ab-5fe8-480f-94d1-594f1fc9b26f-kube-api-access-f8thz\") pod 
\"route-controller-manager-78697df6c5-q924m\" (UID: \"061341ab-5fe8-480f-94d1-594f1fc9b26f\") " pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.553892 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/061341ab-5fe8-480f-94d1-594f1fc9b26f-client-ca\") pod \"route-controller-manager-78697df6c5-q924m\" (UID: \"061341ab-5fe8-480f-94d1-594f1fc9b26f\") " pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.553967 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/061341ab-5fe8-480f-94d1-594f1fc9b26f-config\") pod \"route-controller-manager-78697df6c5-q924m\" (UID: \"061341ab-5fe8-480f-94d1-594f1fc9b26f\") " pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.568332 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m"] Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.654897 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8thz\" (UniqueName: \"kubernetes.io/projected/061341ab-5fe8-480f-94d1-594f1fc9b26f-kube-api-access-f8thz\") pod \"route-controller-manager-78697df6c5-q924m\" (UID: \"061341ab-5fe8-480f-94d1-594f1fc9b26f\") " pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.654952 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/061341ab-5fe8-480f-94d1-594f1fc9b26f-client-ca\") pod 
\"route-controller-manager-78697df6c5-q924m\" (UID: \"061341ab-5fe8-480f-94d1-594f1fc9b26f\") " pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.655215 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/061341ab-5fe8-480f-94d1-594f1fc9b26f-config\") pod \"route-controller-manager-78697df6c5-q924m\" (UID: \"061341ab-5fe8-480f-94d1-594f1fc9b26f\") " pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.655299 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/061341ab-5fe8-480f-94d1-594f1fc9b26f-serving-cert\") pod \"route-controller-manager-78697df6c5-q924m\" (UID: \"061341ab-5fe8-480f-94d1-594f1fc9b26f\") " pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.656059 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/061341ab-5fe8-480f-94d1-594f1fc9b26f-client-ca\") pod \"route-controller-manager-78697df6c5-q924m\" (UID: \"061341ab-5fe8-480f-94d1-594f1fc9b26f\") " pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.656209 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/061341ab-5fe8-480f-94d1-594f1fc9b26f-config\") pod \"route-controller-manager-78697df6c5-q924m\" (UID: \"061341ab-5fe8-480f-94d1-594f1fc9b26f\") " pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.667400 4965 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/061341ab-5fe8-480f-94d1-594f1fc9b26f-serving-cert\") pod \"route-controller-manager-78697df6c5-q924m\" (UID: \"061341ab-5fe8-480f-94d1-594f1fc9b26f\") " pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.684622 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8thz\" (UniqueName: \"kubernetes.io/projected/061341ab-5fe8-480f-94d1-594f1fc9b26f-kube-api-access-f8thz\") pod \"route-controller-manager-78697df6c5-q924m\" (UID: \"061341ab-5fe8-480f-94d1-594f1fc9b26f\") " pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" Feb 19 09:46:03 crc kubenswrapper[4965]: I0219 09:46:03.908410 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" Feb 19 09:46:04 crc kubenswrapper[4965]: I0219 09:46:04.354226 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m"] Feb 19 09:46:04 crc kubenswrapper[4965]: W0219 09:46:04.365963 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod061341ab_5fe8_480f_94d1_594f1fc9b26f.slice/crio-763764dcd8c0f9c76a2a5dfe3f301ae92676e988fc30e5539c3f2b401488f865 WatchSource:0}: Error finding container 763764dcd8c0f9c76a2a5dfe3f301ae92676e988fc30e5539c3f2b401488f865: Status 404 returned error can't find the container with id 763764dcd8c0f9c76a2a5dfe3f301ae92676e988fc30e5539c3f2b401488f865 Feb 19 09:46:04 crc kubenswrapper[4965]: I0219 09:46:04.762273 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c55hf" Feb 19 09:46:04 crc kubenswrapper[4965]: I0219 09:46:04.762805 4965 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c55hf" Feb 19 09:46:04 crc kubenswrapper[4965]: I0219 09:46:04.811180 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c55hf" Feb 19 09:46:04 crc kubenswrapper[4965]: I0219 09:46:04.951251 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" event={"ID":"061341ab-5fe8-480f-94d1-594f1fc9b26f","Type":"ContainerStarted","Data":"b6eb9241f619c670a9850ec17f2ed26c6766e8db31c84dc2882ee91a9892c9ea"} Feb 19 09:46:04 crc kubenswrapper[4965]: I0219 09:46:04.951327 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" event={"ID":"061341ab-5fe8-480f-94d1-594f1fc9b26f","Type":"ContainerStarted","Data":"763764dcd8c0f9c76a2a5dfe3f301ae92676e988fc30e5539c3f2b401488f865"} Feb 19 09:46:04 crc kubenswrapper[4965]: I0219 09:46:04.972019 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" podStartSLOduration=2.971988496 podStartE2EDuration="2.971988496s" podCreationTimestamp="2026-02-19 09:46:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:46:04.968128127 +0000 UTC m=+220.589449437" watchObservedRunningTime="2026-02-19 09:46:04.971988496 +0000 UTC m=+220.593309806" Feb 19 09:46:05 crc kubenswrapper[4965]: I0219 09:46:05.000068 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c55hf" Feb 19 09:46:05 crc kubenswrapper[4965]: I0219 09:46:05.957263 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" Feb 
19 09:46:05 crc kubenswrapper[4965]: I0219 09:46:05.963802 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" Feb 19 09:46:06 crc kubenswrapper[4965]: I0219 09:46:06.056929 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s8pfp" Feb 19 09:46:06 crc kubenswrapper[4965]: I0219 09:46:06.057372 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s8pfp" Feb 19 09:46:06 crc kubenswrapper[4965]: I0219 09:46:06.104300 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s8pfp" Feb 19 09:46:06 crc kubenswrapper[4965]: I0219 09:46:06.334261 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8tjpx" Feb 19 09:46:06 crc kubenswrapper[4965]: I0219 09:46:06.334607 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8tjpx" Feb 19 09:46:06 crc kubenswrapper[4965]: I0219 09:46:06.389002 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8tjpx" Feb 19 09:46:07 crc kubenswrapper[4965]: I0219 09:46:07.012256 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s8pfp" Feb 19 09:46:07 crc kubenswrapper[4965]: I0219 09:46:07.025374 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8tjpx" Feb 19 09:46:08 crc kubenswrapper[4965]: I0219 09:46:08.113382 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8tjpx"] Feb 19 09:46:10 crc kubenswrapper[4965]: I0219 09:46:10.007108 4965 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-operators-8tjpx" podUID="7296417d-9dfd-4ca9-8ad7-c0016daa9b53" containerName="registry-server" containerID="cri-o://6a18b36b7cc4dcad850bd0274daaa5f4e0e1472f4c86ae9d9f6c8f5536ddd84f" gracePeriod=2 Feb 19 09:46:11 crc kubenswrapper[4965]: I0219 09:46:11.017309 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8tjpx" event={"ID":"7296417d-9dfd-4ca9-8ad7-c0016daa9b53","Type":"ContainerDied","Data":"6a18b36b7cc4dcad850bd0274daaa5f4e0e1472f4c86ae9d9f6c8f5536ddd84f"} Feb 19 09:46:11 crc kubenswrapper[4965]: I0219 09:46:11.017381 4965 generic.go:334] "Generic (PLEG): container finished" podID="7296417d-9dfd-4ca9-8ad7-c0016daa9b53" containerID="6a18b36b7cc4dcad850bd0274daaa5f4e0e1472f4c86ae9d9f6c8f5536ddd84f" exitCode=0 Feb 19 09:46:11 crc kubenswrapper[4965]: I0219 09:46:11.260681 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8tjpx" Feb 19 09:46:11 crc kubenswrapper[4965]: I0219 09:46:11.357694 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7296417d-9dfd-4ca9-8ad7-c0016daa9b53-catalog-content\") pod \"7296417d-9dfd-4ca9-8ad7-c0016daa9b53\" (UID: \"7296417d-9dfd-4ca9-8ad7-c0016daa9b53\") " Feb 19 09:46:11 crc kubenswrapper[4965]: I0219 09:46:11.357772 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p7n4\" (UniqueName: \"kubernetes.io/projected/7296417d-9dfd-4ca9-8ad7-c0016daa9b53-kube-api-access-5p7n4\") pod \"7296417d-9dfd-4ca9-8ad7-c0016daa9b53\" (UID: \"7296417d-9dfd-4ca9-8ad7-c0016daa9b53\") " Feb 19 09:46:11 crc kubenswrapper[4965]: I0219 09:46:11.357953 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7296417d-9dfd-4ca9-8ad7-c0016daa9b53-utilities\") pod \"7296417d-9dfd-4ca9-8ad7-c0016daa9b53\" (UID: \"7296417d-9dfd-4ca9-8ad7-c0016daa9b53\") " Feb 19 09:46:11 crc kubenswrapper[4965]: I0219 09:46:11.359179 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7296417d-9dfd-4ca9-8ad7-c0016daa9b53-utilities" (OuterVolumeSpecName: "utilities") pod "7296417d-9dfd-4ca9-8ad7-c0016daa9b53" (UID: "7296417d-9dfd-4ca9-8ad7-c0016daa9b53"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:46:11 crc kubenswrapper[4965]: I0219 09:46:11.360725 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7296417d-9dfd-4ca9-8ad7-c0016daa9b53-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:11 crc kubenswrapper[4965]: I0219 09:46:11.367422 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7296417d-9dfd-4ca9-8ad7-c0016daa9b53-kube-api-access-5p7n4" (OuterVolumeSpecName: "kube-api-access-5p7n4") pod "7296417d-9dfd-4ca9-8ad7-c0016daa9b53" (UID: "7296417d-9dfd-4ca9-8ad7-c0016daa9b53"). InnerVolumeSpecName "kube-api-access-5p7n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:46:11 crc kubenswrapper[4965]: I0219 09:46:11.461601 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5p7n4\" (UniqueName: \"kubernetes.io/projected/7296417d-9dfd-4ca9-8ad7-c0016daa9b53-kube-api-access-5p7n4\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:11 crc kubenswrapper[4965]: I0219 09:46:11.491627 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7296417d-9dfd-4ca9-8ad7-c0016daa9b53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7296417d-9dfd-4ca9-8ad7-c0016daa9b53" (UID: "7296417d-9dfd-4ca9-8ad7-c0016daa9b53"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:46:11 crc kubenswrapper[4965]: I0219 09:46:11.563528 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7296417d-9dfd-4ca9-8ad7-c0016daa9b53-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:12 crc kubenswrapper[4965]: I0219 09:46:12.029776 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8tjpx" event={"ID":"7296417d-9dfd-4ca9-8ad7-c0016daa9b53","Type":"ContainerDied","Data":"fd6de5089869824e38dba5d55f2ba30914a2e723685ba4fd50d3161a36935588"} Feb 19 09:46:12 crc kubenswrapper[4965]: I0219 09:46:12.030122 4965 scope.go:117] "RemoveContainer" containerID="6a18b36b7cc4dcad850bd0274daaa5f4e0e1472f4c86ae9d9f6c8f5536ddd84f" Feb 19 09:46:12 crc kubenswrapper[4965]: I0219 09:46:12.029915 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8tjpx" Feb 19 09:46:12 crc kubenswrapper[4965]: I0219 09:46:12.060293 4965 scope.go:117] "RemoveContainer" containerID="98ddda2a71f22cc5fb88701bd7f1acb311632e3069697f7ffa1af4f3d88f5f2a" Feb 19 09:46:12 crc kubenswrapper[4965]: I0219 09:46:12.078465 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8tjpx"] Feb 19 09:46:12 crc kubenswrapper[4965]: I0219 09:46:12.084123 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8tjpx"] Feb 19 09:46:12 crc kubenswrapper[4965]: I0219 09:46:12.111482 4965 scope.go:117] "RemoveContainer" containerID="8e651eb8e2f2eee86cc9bbe2aabbbf56297f5fd760e9f6136bd99b5a24fbc1e7" Feb 19 09:46:12 crc kubenswrapper[4965]: I0219 09:46:12.648990 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" podUID="e255fdb7-438f-413c-baf2-52e93f1eb0a3" containerName="oauth-openshift" 
containerID="cri-o://03f733d0ceee5e5ff28829ebae2bcf82cacad169ed9730a17481e7fe24f3cdaa" gracePeriod=15 Feb 19 09:46:12 crc kubenswrapper[4965]: I0219 09:46:12.705376 4965 patch_prober.go:28] interesting pod/controller-manager-784c4bbc6f-g2f6x container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" start-of-body= Feb 19 09:46:12 crc kubenswrapper[4965]: I0219 09:46:12.705472 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" podUID="2fe3f117-83d8-436e-90ed-37728223eb61" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" Feb 19 09:46:12 crc kubenswrapper[4965]: I0219 09:46:12.938783 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fxnw5" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.049600 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tlmst" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.053846 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-784c4bbc6f-g2f6x_2fe3f117-83d8-436e-90ed-37728223eb61/controller-manager/0.log" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.053922 4965 generic.go:334] "Generic (PLEG): container finished" podID="2fe3f117-83d8-436e-90ed-37728223eb61" containerID="548ac61ed93abddef97afbe570ce182fef0b4c7d5c89b18be08c44b3527eeba8" exitCode=137 Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.053977 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" 
event={"ID":"2fe3f117-83d8-436e-90ed-37728223eb61","Type":"ContainerDied","Data":"548ac61ed93abddef97afbe570ce182fef0b4c7d5c89b18be08c44b3527eeba8"} Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.063845 4965 generic.go:334] "Generic (PLEG): container finished" podID="e255fdb7-438f-413c-baf2-52e93f1eb0a3" containerID="03f733d0ceee5e5ff28829ebae2bcf82cacad169ed9730a17481e7fe24f3cdaa" exitCode=0 Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.063917 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" event={"ID":"e255fdb7-438f-413c-baf2-52e93f1eb0a3","Type":"ContainerDied","Data":"03f733d0ceee5e5ff28829ebae2bcf82cacad169ed9730a17481e7fe24f3cdaa"} Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.165937 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.206427 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7296417d-9dfd-4ca9-8ad7-c0016daa9b53" path="/var/lib/kubelet/pods/7296417d-9dfd-4ca9-8ad7-c0016daa9b53/volumes" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.290762 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-service-ca\") pod \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.290855 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-template-provider-selection\") pod \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " Feb 19 
09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.290889 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-template-error\") pod \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.290937 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmqnm\" (UniqueName: \"kubernetes.io/projected/e255fdb7-438f-413c-baf2-52e93f1eb0a3-kube-api-access-bmqnm\") pod \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.290998 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-serving-cert\") pod \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.291024 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-trusted-ca-bundle\") pod \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.291051 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-router-certs\") pod \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.291075 4965 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-session\") pod \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.291108 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-template-login\") pod \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.291165 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-audit-policies\") pod \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.291257 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-ocp-branding-template\") pod \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.291323 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-cliconfig\") pod \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.291358 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-idp-0-file-data\") pod \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.291406 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e255fdb7-438f-413c-baf2-52e93f1eb0a3-audit-dir\") pod \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\" (UID: \"e255fdb7-438f-413c-baf2-52e93f1eb0a3\") " Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.292585 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e255fdb7-438f-413c-baf2-52e93f1eb0a3" (UID: "e255fdb7-438f-413c-baf2-52e93f1eb0a3"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.292726 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e255fdb7-438f-413c-baf2-52e93f1eb0a3-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e255fdb7-438f-413c-baf2-52e93f1eb0a3" (UID: "e255fdb7-438f-413c-baf2-52e93f1eb0a3"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.292916 4965 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e255fdb7-438f-413c-baf2-52e93f1eb0a3-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.292939 4965 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.294379 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e255fdb7-438f-413c-baf2-52e93f1eb0a3" (UID: "e255fdb7-438f-413c-baf2-52e93f1eb0a3"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.294523 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e255fdb7-438f-413c-baf2-52e93f1eb0a3" (UID: "e255fdb7-438f-413c-baf2-52e93f1eb0a3"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.296364 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e255fdb7-438f-413c-baf2-52e93f1eb0a3" (UID: "e255fdb7-438f-413c-baf2-52e93f1eb0a3"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.296682 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e255fdb7-438f-413c-baf2-52e93f1eb0a3" (UID: "e255fdb7-438f-413c-baf2-52e93f1eb0a3"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.297130 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e255fdb7-438f-413c-baf2-52e93f1eb0a3" (UID: "e255fdb7-438f-413c-baf2-52e93f1eb0a3"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.297242 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e255fdb7-438f-413c-baf2-52e93f1eb0a3" (UID: "e255fdb7-438f-413c-baf2-52e93f1eb0a3"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.297549 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e255fdb7-438f-413c-baf2-52e93f1eb0a3" (UID: "e255fdb7-438f-413c-baf2-52e93f1eb0a3"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.298085 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e255fdb7-438f-413c-baf2-52e93f1eb0a3" (UID: "e255fdb7-438f-413c-baf2-52e93f1eb0a3"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.299378 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e255fdb7-438f-413c-baf2-52e93f1eb0a3" (UID: "e255fdb7-438f-413c-baf2-52e93f1eb0a3"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.299496 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e255fdb7-438f-413c-baf2-52e93f1eb0a3" (UID: "e255fdb7-438f-413c-baf2-52e93f1eb0a3"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.299506 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e255fdb7-438f-413c-baf2-52e93f1eb0a3-kube-api-access-bmqnm" (OuterVolumeSpecName: "kube-api-access-bmqnm") pod "e255fdb7-438f-413c-baf2-52e93f1eb0a3" (UID: "e255fdb7-438f-413c-baf2-52e93f1eb0a3"). InnerVolumeSpecName "kube-api-access-bmqnm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.300663 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e255fdb7-438f-413c-baf2-52e93f1eb0a3" (UID: "e255fdb7-438f-413c-baf2-52e93f1eb0a3"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.329112 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-784c4bbc6f-g2f6x_2fe3f117-83d8-436e-90ed-37728223eb61/controller-manager/0.log" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.329209 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.394661 4965 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.394730 4965 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.394753 4965 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:13 crc 
kubenswrapper[4965]: I0219 09:46:13.394772 4965 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.394788 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmqnm\" (UniqueName: \"kubernetes.io/projected/e255fdb7-438f-413c-baf2-52e93f1eb0a3-kube-api-access-bmqnm\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.394803 4965 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.394820 4965 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.394835 4965 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.395387 4965 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.395441 4965 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.395463 4965 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e255fdb7-438f-413c-baf2-52e93f1eb0a3-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.395482 4965 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e255fdb7-438f-413c-baf2-52e93f1eb0a3-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.496316 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fe3f117-83d8-436e-90ed-37728223eb61-client-ca\") pod \"2fe3f117-83d8-436e-90ed-37728223eb61\" (UID: \"2fe3f117-83d8-436e-90ed-37728223eb61\") " Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.496370 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fe3f117-83d8-436e-90ed-37728223eb61-config\") pod \"2fe3f117-83d8-436e-90ed-37728223eb61\" (UID: \"2fe3f117-83d8-436e-90ed-37728223eb61\") " Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.496435 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2fe3f117-83d8-436e-90ed-37728223eb61-proxy-ca-bundles\") pod \"2fe3f117-83d8-436e-90ed-37728223eb61\" (UID: \"2fe3f117-83d8-436e-90ed-37728223eb61\") " Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.496515 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2fe3f117-83d8-436e-90ed-37728223eb61-serving-cert\") pod \"2fe3f117-83d8-436e-90ed-37728223eb61\" (UID: \"2fe3f117-83d8-436e-90ed-37728223eb61\") " Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.496587 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdwj4\" (UniqueName: \"kubernetes.io/projected/2fe3f117-83d8-436e-90ed-37728223eb61-kube-api-access-tdwj4\") pod \"2fe3f117-83d8-436e-90ed-37728223eb61\" (UID: \"2fe3f117-83d8-436e-90ed-37728223eb61\") " Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.497874 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fe3f117-83d8-436e-90ed-37728223eb61-client-ca" (OuterVolumeSpecName: "client-ca") pod "2fe3f117-83d8-436e-90ed-37728223eb61" (UID: "2fe3f117-83d8-436e-90ed-37728223eb61"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.497889 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fe3f117-83d8-436e-90ed-37728223eb61-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2fe3f117-83d8-436e-90ed-37728223eb61" (UID: "2fe3f117-83d8-436e-90ed-37728223eb61"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.498031 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fe3f117-83d8-436e-90ed-37728223eb61-config" (OuterVolumeSpecName: "config") pod "2fe3f117-83d8-436e-90ed-37728223eb61" (UID: "2fe3f117-83d8-436e-90ed-37728223eb61"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.500724 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fe3f117-83d8-436e-90ed-37728223eb61-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2fe3f117-83d8-436e-90ed-37728223eb61" (UID: "2fe3f117-83d8-436e-90ed-37728223eb61"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.511442 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fe3f117-83d8-436e-90ed-37728223eb61-kube-api-access-tdwj4" (OuterVolumeSpecName: "kube-api-access-tdwj4") pod "2fe3f117-83d8-436e-90ed-37728223eb61" (UID: "2fe3f117-83d8-436e-90ed-37728223eb61"). InnerVolumeSpecName "kube-api-access-tdwj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.598646 4965 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fe3f117-83d8-436e-90ed-37728223eb61-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.598708 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdwj4\" (UniqueName: \"kubernetes.io/projected/2fe3f117-83d8-436e-90ed-37728223eb61-kube-api-access-tdwj4\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.598729 4965 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fe3f117-83d8-436e-90ed-37728223eb61-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.598748 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fe3f117-83d8-436e-90ed-37728223eb61-config\") on node \"crc\" DevicePath 
\"\"" Feb 19 09:46:13 crc kubenswrapper[4965]: I0219 09:46:13.598767 4965 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2fe3f117-83d8-436e-90ed-37728223eb61-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:14 crc kubenswrapper[4965]: I0219 09:46:14.070297 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-784c4bbc6f-g2f6x_2fe3f117-83d8-436e-90ed-37728223eb61/controller-manager/0.log" Feb 19 09:46:14 crc kubenswrapper[4965]: I0219 09:46:14.070711 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" event={"ID":"2fe3f117-83d8-436e-90ed-37728223eb61","Type":"ContainerDied","Data":"6720008ec0df243c334d0fbf8dea7052853f637af1db6f4e471dd77d8684027d"} Feb 19 09:46:14 crc kubenswrapper[4965]: I0219 09:46:14.070759 4965 scope.go:117] "RemoveContainer" containerID="548ac61ed93abddef97afbe570ce182fef0b4c7d5c89b18be08c44b3527eeba8" Feb 19 09:46:14 crc kubenswrapper[4965]: I0219 09:46:14.070770 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x" Feb 19 09:46:14 crc kubenswrapper[4965]: I0219 09:46:14.072043 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" event={"ID":"e255fdb7-438f-413c-baf2-52e93f1eb0a3","Type":"ContainerDied","Data":"ba7fa50cfe4329cf5c0a1936e34e70471d831c12c0bf9ba143aef1632f445bd2"} Feb 19 09:46:14 crc kubenswrapper[4965]: I0219 09:46:14.072120 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-88wlz" Feb 19 09:46:14 crc kubenswrapper[4965]: I0219 09:46:14.101967 4965 scope.go:117] "RemoveContainer" containerID="03f733d0ceee5e5ff28829ebae2bcf82cacad169ed9730a17481e7fe24f3cdaa" Feb 19 09:46:14 crc kubenswrapper[4965]: I0219 09:46:14.116432 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x"] Feb 19 09:46:14 crc kubenswrapper[4965]: I0219 09:46:14.125583 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-784c4bbc6f-g2f6x"] Feb 19 09:46:14 crc kubenswrapper[4965]: I0219 09:46:14.135032 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-88wlz"] Feb 19 09:46:14 crc kubenswrapper[4965]: I0219 09:46:14.137653 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-88wlz"] Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.204859 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fe3f117-83d8-436e-90ed-37728223eb61" path="/var/lib/kubelet/pods/2fe3f117-83d8-436e-90ed-37728223eb61/volumes" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.205591 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e255fdb7-438f-413c-baf2-52e93f1eb0a3" path="/var/lib/kubelet/pods/e255fdb7-438f-413c-baf2-52e93f1eb0a3/volumes" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.560006 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp"] Feb 19 09:46:15 crc kubenswrapper[4965]: E0219 09:46:15.560383 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7296417d-9dfd-4ca9-8ad7-c0016daa9b53" containerName="extract-content" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.560414 4965 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="7296417d-9dfd-4ca9-8ad7-c0016daa9b53" containerName="extract-content" Feb 19 09:46:15 crc kubenswrapper[4965]: E0219 09:46:15.560428 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7296417d-9dfd-4ca9-8ad7-c0016daa9b53" containerName="extract-utilities" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.560436 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="7296417d-9dfd-4ca9-8ad7-c0016daa9b53" containerName="extract-utilities" Feb 19 09:46:15 crc kubenswrapper[4965]: E0219 09:46:15.560448 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e255fdb7-438f-413c-baf2-52e93f1eb0a3" containerName="oauth-openshift" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.560457 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="e255fdb7-438f-413c-baf2-52e93f1eb0a3" containerName="oauth-openshift" Feb 19 09:46:15 crc kubenswrapper[4965]: E0219 09:46:15.560469 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe3f117-83d8-436e-90ed-37728223eb61" containerName="controller-manager" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.560477 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe3f117-83d8-436e-90ed-37728223eb61" containerName="controller-manager" Feb 19 09:46:15 crc kubenswrapper[4965]: E0219 09:46:15.560496 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7296417d-9dfd-4ca9-8ad7-c0016daa9b53" containerName="registry-server" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.560504 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="7296417d-9dfd-4ca9-8ad7-c0016daa9b53" containerName="registry-server" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.560622 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="7296417d-9dfd-4ca9-8ad7-c0016daa9b53" containerName="registry-server" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.560648 4965 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2fe3f117-83d8-436e-90ed-37728223eb61" containerName="controller-manager" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.560658 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="e255fdb7-438f-413c-baf2-52e93f1eb0a3" containerName="oauth-openshift" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.561173 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.563688 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.564310 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.564666 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.565442 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.565692 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.566562 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.574522 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp"] Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.577413 4965 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.726698 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df67c235-5c37-4aff-be16-64f660b26f32-config\") pod \"controller-manager-f9f4cb9b6-tvlnp\" (UID: \"df67c235-5c37-4aff-be16-64f660b26f32\") " pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.727181 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df67c235-5c37-4aff-be16-64f660b26f32-client-ca\") pod \"controller-manager-f9f4cb9b6-tvlnp\" (UID: \"df67c235-5c37-4aff-be16-64f660b26f32\") " pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.727308 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df67c235-5c37-4aff-be16-64f660b26f32-proxy-ca-bundles\") pod \"controller-manager-f9f4cb9b6-tvlnp\" (UID: \"df67c235-5c37-4aff-be16-64f660b26f32\") " pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.727387 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxv7k\" (UniqueName: \"kubernetes.io/projected/df67c235-5c37-4aff-be16-64f660b26f32-kube-api-access-zxv7k\") pod \"controller-manager-f9f4cb9b6-tvlnp\" (UID: \"df67c235-5c37-4aff-be16-64f660b26f32\") " pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.727424 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/df67c235-5c37-4aff-be16-64f660b26f32-serving-cert\") pod \"controller-manager-f9f4cb9b6-tvlnp\" (UID: \"df67c235-5c37-4aff-be16-64f660b26f32\") " pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.828710 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df67c235-5c37-4aff-be16-64f660b26f32-proxy-ca-bundles\") pod \"controller-manager-f9f4cb9b6-tvlnp\" (UID: \"df67c235-5c37-4aff-be16-64f660b26f32\") " pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.828798 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxv7k\" (UniqueName: \"kubernetes.io/projected/df67c235-5c37-4aff-be16-64f660b26f32-kube-api-access-zxv7k\") pod \"controller-manager-f9f4cb9b6-tvlnp\" (UID: \"df67c235-5c37-4aff-be16-64f660b26f32\") " pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.828832 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df67c235-5c37-4aff-be16-64f660b26f32-serving-cert\") pod \"controller-manager-f9f4cb9b6-tvlnp\" (UID: \"df67c235-5c37-4aff-be16-64f660b26f32\") " pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.828891 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df67c235-5c37-4aff-be16-64f660b26f32-config\") pod \"controller-manager-f9f4cb9b6-tvlnp\" (UID: \"df67c235-5c37-4aff-be16-64f660b26f32\") " pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.828920 4965 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df67c235-5c37-4aff-be16-64f660b26f32-client-ca\") pod \"controller-manager-f9f4cb9b6-tvlnp\" (UID: \"df67c235-5c37-4aff-be16-64f660b26f32\") " pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.830317 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df67c235-5c37-4aff-be16-64f660b26f32-client-ca\") pod \"controller-manager-f9f4cb9b6-tvlnp\" (UID: \"df67c235-5c37-4aff-be16-64f660b26f32\") " pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.830474 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df67c235-5c37-4aff-be16-64f660b26f32-proxy-ca-bundles\") pod \"controller-manager-f9f4cb9b6-tvlnp\" (UID: \"df67c235-5c37-4aff-be16-64f660b26f32\") " pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.830637 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df67c235-5c37-4aff-be16-64f660b26f32-config\") pod \"controller-manager-f9f4cb9b6-tvlnp\" (UID: \"df67c235-5c37-4aff-be16-64f660b26f32\") " pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.834397 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df67c235-5c37-4aff-be16-64f660b26f32-serving-cert\") pod \"controller-manager-f9f4cb9b6-tvlnp\" (UID: \"df67c235-5c37-4aff-be16-64f660b26f32\") " pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" Feb 19 09:46:15 crc 
kubenswrapper[4965]: I0219 09:46:15.844616 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxv7k\" (UniqueName: \"kubernetes.io/projected/df67c235-5c37-4aff-be16-64f660b26f32-kube-api-access-zxv7k\") pod \"controller-manager-f9f4cb9b6-tvlnp\" (UID: \"df67c235-5c37-4aff-be16-64f660b26f32\") " pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" Feb 19 09:46:15 crc kubenswrapper[4965]: I0219 09:46:15.876685 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" Feb 19 09:46:16 crc kubenswrapper[4965]: I0219 09:46:16.438183 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp"] Feb 19 09:46:16 crc kubenswrapper[4965]: I0219 09:46:16.601424 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:46:16 crc kubenswrapper[4965]: I0219 09:46:16.601701 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:46:16 crc kubenswrapper[4965]: I0219 09:46:16.601773 4965 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" Feb 19 09:46:16 crc kubenswrapper[4965]: I0219 09:46:16.602546 4965 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"a1ff237da7e509d3b4a25e8042c384a768ef0123d1687b574502f769bde3121b"} pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 09:46:16 crc kubenswrapper[4965]: I0219 09:46:16.602624 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" containerID="cri-o://a1ff237da7e509d3b4a25e8042c384a768ef0123d1687b574502f769bde3121b" gracePeriod=600 Feb 19 09:46:17 crc kubenswrapper[4965]: I0219 09:46:17.102551 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" event={"ID":"df67c235-5c37-4aff-be16-64f660b26f32","Type":"ContainerStarted","Data":"8aee0c2cdbe6377efe8b5304dcb8a8d3330256ac3e6068a906043951c755e938"} Feb 19 09:46:17 crc kubenswrapper[4965]: I0219 09:46:17.103073 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" event={"ID":"df67c235-5c37-4aff-be16-64f660b26f32","Type":"ContainerStarted","Data":"ba1d997cc2bcacac7557e27d3a34be576a8933df81399e3fab3ae443ce6b228d"} Feb 19 09:46:17 crc kubenswrapper[4965]: I0219 09:46:17.103115 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" Feb 19 09:46:17 crc kubenswrapper[4965]: I0219 09:46:17.106537 4965 generic.go:334] "Generic (PLEG): container finished" podID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerID="a1ff237da7e509d3b4a25e8042c384a768ef0123d1687b574502f769bde3121b" exitCode=0 Feb 19 09:46:17 crc kubenswrapper[4965]: I0219 09:46:17.106576 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" 
event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerDied","Data":"a1ff237da7e509d3b4a25e8042c384a768ef0123d1687b574502f769bde3121b"} Feb 19 09:46:17 crc kubenswrapper[4965]: I0219 09:46:17.107036 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerStarted","Data":"a3a16677e101e0014d7e0c43b5b3a431fd87db479114715ef53b03062691e273"} Feb 19 09:46:17 crc kubenswrapper[4965]: I0219 09:46:17.111743 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" Feb 19 09:46:17 crc kubenswrapper[4965]: I0219 09:46:17.120832 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" podStartSLOduration=15.120818232 podStartE2EDuration="15.120818232s" podCreationTimestamp="2026-02-19 09:46:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:46:17.120046612 +0000 UTC m=+232.741367942" watchObservedRunningTime="2026-02-19 09:46:17.120818232 +0000 UTC m=+232.742139542" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.564260 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn"] Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.566006 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.568039 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.574060 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.574340 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.575263 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.578581 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.579057 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.579800 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.580793 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.580976 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.581582 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 09:46:18 crc 
kubenswrapper[4965]: I0219 09:46:18.581736 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.582146 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.583528 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.588992 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn"] Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.595634 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.600340 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.667735 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-system-router-certs\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.667780 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: 
\"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.667803 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-system-service-ca\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.667839 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-audit-policies\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.668070 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.668101 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-audit-dir\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.668117 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.668134 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-user-template-error\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.668155 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkngd\" (UniqueName: \"kubernetes.io/projected/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-kube-api-access-tkngd\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.668179 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.668292 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-system-session\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.668415 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-user-template-login\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.668491 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.668515 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.769866 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-audit-policies\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: 
\"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.770035 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.770106 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-audit-dir\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.770157 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.770271 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-audit-dir\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.770281 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-user-template-error\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.770371 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkngd\" (UniqueName: \"kubernetes.io/projected/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-kube-api-access-tkngd\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.770399 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.770429 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-system-session\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.770449 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-user-template-login\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 
19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.770487 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.770513 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.770561 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-system-router-certs\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.770589 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.770615 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-system-service-ca\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.770752 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-audit-policies\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.770998 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.771255 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-system-service-ca\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.774242 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " 
pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.778708 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.779257 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.779328 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-system-session\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.781439 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.780539 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-user-template-login\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.781125 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.781048 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-user-template-error\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.781730 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-v4-0-config-system-router-certs\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.793851 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkngd\" (UniqueName: \"kubernetes.io/projected/bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a-kube-api-access-tkngd\") pod \"oauth-openshift-77df6bdc9c-lfqjn\" (UID: \"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a\") " 
pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:18 crc kubenswrapper[4965]: I0219 09:46:18.963171 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:19 crc kubenswrapper[4965]: I0219 09:46:19.395183 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn"] Feb 19 09:46:20 crc kubenswrapper[4965]: I0219 09:46:20.130988 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" event={"ID":"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a","Type":"ContainerStarted","Data":"3e00f6a67166240ae5da4f1f3de0cf3f88f9e32849c5a2f1daa35be4670f9da1"} Feb 19 09:46:20 crc kubenswrapper[4965]: I0219 09:46:20.131593 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" event={"ID":"bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a","Type":"ContainerStarted","Data":"46d9d703e248bdc28ba75630fbba0805d52df3295ef9f01b615347d625b68dbc"} Feb 19 09:46:20 crc kubenswrapper[4965]: I0219 09:46:20.131636 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:20 crc kubenswrapper[4965]: I0219 09:46:20.156101 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" podStartSLOduration=33.156073531 podStartE2EDuration="33.156073531s" podCreationTimestamp="2026-02-19 09:45:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:46:20.155827545 +0000 UTC m=+235.777148855" watchObservedRunningTime="2026-02-19 09:46:20.156073531 +0000 UTC m=+235.777394861" Feb 19 09:46:20 crc kubenswrapper[4965]: I0219 09:46:20.181042 4965 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.195995 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp"] Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.196603 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" podUID="df67c235-5c37-4aff-be16-64f660b26f32" containerName="controller-manager" containerID="cri-o://8aee0c2cdbe6377efe8b5304dcb8a8d3330256ac3e6068a906043951c755e938" gracePeriod=30 Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.294865 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m"] Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.295124 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" podUID="061341ab-5fe8-480f-94d1-594f1fc9b26f" containerName="route-controller-manager" containerID="cri-o://b6eb9241f619c670a9850ec17f2ed26c6766e8db31c84dc2882ee91a9892c9ea" gracePeriod=30 Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.628323 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.686687 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.727818 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df67c235-5c37-4aff-be16-64f660b26f32-proxy-ca-bundles\") pod \"df67c235-5c37-4aff-be16-64f660b26f32\" (UID: \"df67c235-5c37-4aff-be16-64f660b26f32\") " Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.728298 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df67c235-5c37-4aff-be16-64f660b26f32-config\") pod \"df67c235-5c37-4aff-be16-64f660b26f32\" (UID: \"df67c235-5c37-4aff-be16-64f660b26f32\") " Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.728429 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8thz\" (UniqueName: \"kubernetes.io/projected/061341ab-5fe8-480f-94d1-594f1fc9b26f-kube-api-access-f8thz\") pod \"061341ab-5fe8-480f-94d1-594f1fc9b26f\" (UID: \"061341ab-5fe8-480f-94d1-594f1fc9b26f\") " Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.728481 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxv7k\" (UniqueName: \"kubernetes.io/projected/df67c235-5c37-4aff-be16-64f660b26f32-kube-api-access-zxv7k\") pod \"df67c235-5c37-4aff-be16-64f660b26f32\" (UID: \"df67c235-5c37-4aff-be16-64f660b26f32\") " Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.728522 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/061341ab-5fe8-480f-94d1-594f1fc9b26f-serving-cert\") pod \"061341ab-5fe8-480f-94d1-594f1fc9b26f\" (UID: \"061341ab-5fe8-480f-94d1-594f1fc9b26f\") " Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.728549 4965 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/061341ab-5fe8-480f-94d1-594f1fc9b26f-config\") pod \"061341ab-5fe8-480f-94d1-594f1fc9b26f\" (UID: \"061341ab-5fe8-480f-94d1-594f1fc9b26f\") " Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.728628 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df67c235-5c37-4aff-be16-64f660b26f32-client-ca\") pod \"df67c235-5c37-4aff-be16-64f660b26f32\" (UID: \"df67c235-5c37-4aff-be16-64f660b26f32\") " Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.728655 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/061341ab-5fe8-480f-94d1-594f1fc9b26f-client-ca\") pod \"061341ab-5fe8-480f-94d1-594f1fc9b26f\" (UID: \"061341ab-5fe8-480f-94d1-594f1fc9b26f\") " Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.728692 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df67c235-5c37-4aff-be16-64f660b26f32-serving-cert\") pod \"df67c235-5c37-4aff-be16-64f660b26f32\" (UID: \"df67c235-5c37-4aff-be16-64f660b26f32\") " Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.730744 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df67c235-5c37-4aff-be16-64f660b26f32-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "df67c235-5c37-4aff-be16-64f660b26f32" (UID: "df67c235-5c37-4aff-be16-64f660b26f32"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.734907 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/061341ab-5fe8-480f-94d1-594f1fc9b26f-kube-api-access-f8thz" (OuterVolumeSpecName: "kube-api-access-f8thz") pod "061341ab-5fe8-480f-94d1-594f1fc9b26f" (UID: "061341ab-5fe8-480f-94d1-594f1fc9b26f"). InnerVolumeSpecName "kube-api-access-f8thz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.734976 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df67c235-5c37-4aff-be16-64f660b26f32-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "df67c235-5c37-4aff-be16-64f660b26f32" (UID: "df67c235-5c37-4aff-be16-64f660b26f32"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.735721 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df67c235-5c37-4aff-be16-64f660b26f32-config" (OuterVolumeSpecName: "config") pod "df67c235-5c37-4aff-be16-64f660b26f32" (UID: "df67c235-5c37-4aff-be16-64f660b26f32"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.736096 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/061341ab-5fe8-480f-94d1-594f1fc9b26f-config" (OuterVolumeSpecName: "config") pod "061341ab-5fe8-480f-94d1-594f1fc9b26f" (UID: "061341ab-5fe8-480f-94d1-594f1fc9b26f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.736253 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df67c235-5c37-4aff-be16-64f660b26f32-client-ca" (OuterVolumeSpecName: "client-ca") pod "df67c235-5c37-4aff-be16-64f660b26f32" (UID: "df67c235-5c37-4aff-be16-64f660b26f32"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.736570 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/061341ab-5fe8-480f-94d1-594f1fc9b26f-client-ca" (OuterVolumeSpecName: "client-ca") pod "061341ab-5fe8-480f-94d1-594f1fc9b26f" (UID: "061341ab-5fe8-480f-94d1-594f1fc9b26f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.736565 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/061341ab-5fe8-480f-94d1-594f1fc9b26f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "061341ab-5fe8-480f-94d1-594f1fc9b26f" (UID: "061341ab-5fe8-480f-94d1-594f1fc9b26f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.737237 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df67c235-5c37-4aff-be16-64f660b26f32-kube-api-access-zxv7k" (OuterVolumeSpecName: "kube-api-access-zxv7k") pod "df67c235-5c37-4aff-be16-64f660b26f32" (UID: "df67c235-5c37-4aff-be16-64f660b26f32"). InnerVolumeSpecName "kube-api-access-zxv7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.830047 4965 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df67c235-5c37-4aff-be16-64f660b26f32-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.830104 4965 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/061341ab-5fe8-480f-94d1-594f1fc9b26f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.830117 4965 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df67c235-5c37-4aff-be16-64f660b26f32-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.830132 4965 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df67c235-5c37-4aff-be16-64f660b26f32-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.830182 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df67c235-5c37-4aff-be16-64f660b26f32-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.830216 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8thz\" (UniqueName: \"kubernetes.io/projected/061341ab-5fe8-480f-94d1-594f1fc9b26f-kube-api-access-f8thz\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.830232 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxv7k\" (UniqueName: \"kubernetes.io/projected/df67c235-5c37-4aff-be16-64f660b26f32-kube-api-access-zxv7k\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.830243 4965 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/061341ab-5fe8-480f-94d1-594f1fc9b26f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:22 crc kubenswrapper[4965]: I0219 09:46:22.830254 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/061341ab-5fe8-480f-94d1-594f1fc9b26f-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.149460 4965 generic.go:334] "Generic (PLEG): container finished" podID="df67c235-5c37-4aff-be16-64f660b26f32" containerID="8aee0c2cdbe6377efe8b5304dcb8a8d3330256ac3e6068a906043951c755e938" exitCode=0 Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.149547 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" event={"ID":"df67c235-5c37-4aff-be16-64f660b26f32","Type":"ContainerDied","Data":"8aee0c2cdbe6377efe8b5304dcb8a8d3330256ac3e6068a906043951c755e938"} Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.149594 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" event={"ID":"df67c235-5c37-4aff-be16-64f660b26f32","Type":"ContainerDied","Data":"ba1d997cc2bcacac7557e27d3a34be576a8933df81399e3fab3ae443ce6b228d"} Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.149624 4965 scope.go:117] "RemoveContainer" containerID="8aee0c2cdbe6377efe8b5304dcb8a8d3330256ac3e6068a906043951c755e938" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.149648 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.152127 4965 generic.go:334] "Generic (PLEG): container finished" podID="061341ab-5fe8-480f-94d1-594f1fc9b26f" containerID="b6eb9241f619c670a9850ec17f2ed26c6766e8db31c84dc2882ee91a9892c9ea" exitCode=0 Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.152262 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.152256 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" event={"ID":"061341ab-5fe8-480f-94d1-594f1fc9b26f","Type":"ContainerDied","Data":"b6eb9241f619c670a9850ec17f2ed26c6766e8db31c84dc2882ee91a9892c9ea"} Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.152943 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m" event={"ID":"061341ab-5fe8-480f-94d1-594f1fc9b26f","Type":"ContainerDied","Data":"763764dcd8c0f9c76a2a5dfe3f301ae92676e988fc30e5539c3f2b401488f865"} Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.172418 4965 scope.go:117] "RemoveContainer" containerID="8aee0c2cdbe6377efe8b5304dcb8a8d3330256ac3e6068a906043951c755e938" Feb 19 09:46:23 crc kubenswrapper[4965]: E0219 09:46:23.172936 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aee0c2cdbe6377efe8b5304dcb8a8d3330256ac3e6068a906043951c755e938\": container with ID starting with 8aee0c2cdbe6377efe8b5304dcb8a8d3330256ac3e6068a906043951c755e938 not found: ID does not exist" containerID="8aee0c2cdbe6377efe8b5304dcb8a8d3330256ac3e6068a906043951c755e938" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.172981 4965 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aee0c2cdbe6377efe8b5304dcb8a8d3330256ac3e6068a906043951c755e938"} err="failed to get container status \"8aee0c2cdbe6377efe8b5304dcb8a8d3330256ac3e6068a906043951c755e938\": rpc error: code = NotFound desc = could not find container \"8aee0c2cdbe6377efe8b5304dcb8a8d3330256ac3e6068a906043951c755e938\": container with ID starting with 8aee0c2cdbe6377efe8b5304dcb8a8d3330256ac3e6068a906043951c755e938 not found: ID does not exist" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.173019 4965 scope.go:117] "RemoveContainer" containerID="b6eb9241f619c670a9850ec17f2ed26c6766e8db31c84dc2882ee91a9892c9ea" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.187479 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m"] Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.193765 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78697df6c5-q924m"] Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.200269 4965 scope.go:117] "RemoveContainer" containerID="b6eb9241f619c670a9850ec17f2ed26c6766e8db31c84dc2882ee91a9892c9ea" Feb 19 09:46:23 crc kubenswrapper[4965]: E0219 09:46:23.201597 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6eb9241f619c670a9850ec17f2ed26c6766e8db31c84dc2882ee91a9892c9ea\": container with ID starting with b6eb9241f619c670a9850ec17f2ed26c6766e8db31c84dc2882ee91a9892c9ea not found: ID does not exist" containerID="b6eb9241f619c670a9850ec17f2ed26c6766e8db31c84dc2882ee91a9892c9ea" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.201654 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6eb9241f619c670a9850ec17f2ed26c6766e8db31c84dc2882ee91a9892c9ea"} err="failed to 
get container status \"b6eb9241f619c670a9850ec17f2ed26c6766e8db31c84dc2882ee91a9892c9ea\": rpc error: code = NotFound desc = could not find container \"b6eb9241f619c670a9850ec17f2ed26c6766e8db31c84dc2882ee91a9892c9ea\": container with ID starting with b6eb9241f619c670a9850ec17f2ed26c6766e8db31c84dc2882ee91a9892c9ea not found: ID does not exist" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.220141 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="061341ab-5fe8-480f-94d1-594f1fc9b26f" path="/var/lib/kubelet/pods/061341ab-5fe8-480f-94d1-594f1fc9b26f/volumes" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.220861 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp"] Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.223000 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f9f4cb9b6-tvlnp"] Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.568661 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f5ff477dd-4bshk"] Feb 19 09:46:23 crc kubenswrapper[4965]: E0219 09:46:23.569270 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="061341ab-5fe8-480f-94d1-594f1fc9b26f" containerName="route-controller-manager" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.569339 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="061341ab-5fe8-480f-94d1-594f1fc9b26f" containerName="route-controller-manager" Feb 19 09:46:23 crc kubenswrapper[4965]: E0219 09:46:23.569358 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df67c235-5c37-4aff-be16-64f660b26f32" containerName="controller-manager" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.569370 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="df67c235-5c37-4aff-be16-64f660b26f32" containerName="controller-manager" Feb 19 
09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.569716 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="061341ab-5fe8-480f-94d1-594f1fc9b26f" containerName="route-controller-manager" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.569848 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="df67c235-5c37-4aff-be16-64f660b26f32" containerName="controller-manager" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.570790 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f5ff477dd-4bshk" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.574947 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.575124 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.575255 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.575435 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.578289 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.578712 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.582792 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77b488fb6d-cl9wd"] Feb 19 09:46:23 crc 
kubenswrapper[4965]: I0219 09:46:23.583875 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77b488fb6d-cl9wd" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.585745 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.589691 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.590385 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.590506 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.590444 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.591436 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.592713 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f5ff477dd-4bshk"] Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.595275 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.595745 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77b488fb6d-cl9wd"] Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.643522 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/858eb55a-89cd-40d6-a8a1-beec97cc18a6-serving-cert\") pod \"controller-manager-77b488fb6d-cl9wd\" (UID: \"858eb55a-89cd-40d6-a8a1-beec97cc18a6\") " pod="openshift-controller-manager/controller-manager-77b488fb6d-cl9wd" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.643583 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/858eb55a-89cd-40d6-a8a1-beec97cc18a6-proxy-ca-bundles\") pod \"controller-manager-77b488fb6d-cl9wd\" (UID: \"858eb55a-89cd-40d6-a8a1-beec97cc18a6\") " pod="openshift-controller-manager/controller-manager-77b488fb6d-cl9wd" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.643608 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jgww\" (UniqueName: \"kubernetes.io/projected/858eb55a-89cd-40d6-a8a1-beec97cc18a6-kube-api-access-4jgww\") pod \"controller-manager-77b488fb6d-cl9wd\" (UID: \"858eb55a-89cd-40d6-a8a1-beec97cc18a6\") " pod="openshift-controller-manager/controller-manager-77b488fb6d-cl9wd" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.643627 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/858eb55a-89cd-40d6-a8a1-beec97cc18a6-client-ca\") pod \"controller-manager-77b488fb6d-cl9wd\" (UID: \"858eb55a-89cd-40d6-a8a1-beec97cc18a6\") " pod="openshift-controller-manager/controller-manager-77b488fb6d-cl9wd" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.643664 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ced7a0d-5026-48b4-9bbe-bfe2af69c545-config\") pod \"route-controller-manager-f5ff477dd-4bshk\" (UID: 
\"9ced7a0d-5026-48b4-9bbe-bfe2af69c545\") " pod="openshift-route-controller-manager/route-controller-manager-f5ff477dd-4bshk" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.643679 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/858eb55a-89cd-40d6-a8a1-beec97cc18a6-config\") pod \"controller-manager-77b488fb6d-cl9wd\" (UID: \"858eb55a-89cd-40d6-a8a1-beec97cc18a6\") " pod="openshift-controller-manager/controller-manager-77b488fb6d-cl9wd" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.643715 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ced7a0d-5026-48b4-9bbe-bfe2af69c545-serving-cert\") pod \"route-controller-manager-f5ff477dd-4bshk\" (UID: \"9ced7a0d-5026-48b4-9bbe-bfe2af69c545\") " pod="openshift-route-controller-manager/route-controller-manager-f5ff477dd-4bshk" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.643747 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld42c\" (UniqueName: \"kubernetes.io/projected/9ced7a0d-5026-48b4-9bbe-bfe2af69c545-kube-api-access-ld42c\") pod \"route-controller-manager-f5ff477dd-4bshk\" (UID: \"9ced7a0d-5026-48b4-9bbe-bfe2af69c545\") " pod="openshift-route-controller-manager/route-controller-manager-f5ff477dd-4bshk" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.643773 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ced7a0d-5026-48b4-9bbe-bfe2af69c545-client-ca\") pod \"route-controller-manager-f5ff477dd-4bshk\" (UID: \"9ced7a0d-5026-48b4-9bbe-bfe2af69c545\") " pod="openshift-route-controller-manager/route-controller-manager-f5ff477dd-4bshk" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.744859 4965 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ced7a0d-5026-48b4-9bbe-bfe2af69c545-serving-cert\") pod \"route-controller-manager-f5ff477dd-4bshk\" (UID: \"9ced7a0d-5026-48b4-9bbe-bfe2af69c545\") " pod="openshift-route-controller-manager/route-controller-manager-f5ff477dd-4bshk" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.744932 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld42c\" (UniqueName: \"kubernetes.io/projected/9ced7a0d-5026-48b4-9bbe-bfe2af69c545-kube-api-access-ld42c\") pod \"route-controller-manager-f5ff477dd-4bshk\" (UID: \"9ced7a0d-5026-48b4-9bbe-bfe2af69c545\") " pod="openshift-route-controller-manager/route-controller-manager-f5ff477dd-4bshk" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.744961 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ced7a0d-5026-48b4-9bbe-bfe2af69c545-client-ca\") pod \"route-controller-manager-f5ff477dd-4bshk\" (UID: \"9ced7a0d-5026-48b4-9bbe-bfe2af69c545\") " pod="openshift-route-controller-manager/route-controller-manager-f5ff477dd-4bshk" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.744994 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/858eb55a-89cd-40d6-a8a1-beec97cc18a6-proxy-ca-bundles\") pod \"controller-manager-77b488fb6d-cl9wd\" (UID: \"858eb55a-89cd-40d6-a8a1-beec97cc18a6\") " pod="openshift-controller-manager/controller-manager-77b488fb6d-cl9wd" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.745017 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/858eb55a-89cd-40d6-a8a1-beec97cc18a6-serving-cert\") pod \"controller-manager-77b488fb6d-cl9wd\" (UID: 
\"858eb55a-89cd-40d6-a8a1-beec97cc18a6\") " pod="openshift-controller-manager/controller-manager-77b488fb6d-cl9wd" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.745041 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jgww\" (UniqueName: \"kubernetes.io/projected/858eb55a-89cd-40d6-a8a1-beec97cc18a6-kube-api-access-4jgww\") pod \"controller-manager-77b488fb6d-cl9wd\" (UID: \"858eb55a-89cd-40d6-a8a1-beec97cc18a6\") " pod="openshift-controller-manager/controller-manager-77b488fb6d-cl9wd" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.745056 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/858eb55a-89cd-40d6-a8a1-beec97cc18a6-client-ca\") pod \"controller-manager-77b488fb6d-cl9wd\" (UID: \"858eb55a-89cd-40d6-a8a1-beec97cc18a6\") " pod="openshift-controller-manager/controller-manager-77b488fb6d-cl9wd" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.745098 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ced7a0d-5026-48b4-9bbe-bfe2af69c545-config\") pod \"route-controller-manager-f5ff477dd-4bshk\" (UID: \"9ced7a0d-5026-48b4-9bbe-bfe2af69c545\") " pod="openshift-route-controller-manager/route-controller-manager-f5ff477dd-4bshk" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.745117 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/858eb55a-89cd-40d6-a8a1-beec97cc18a6-config\") pod \"controller-manager-77b488fb6d-cl9wd\" (UID: \"858eb55a-89cd-40d6-a8a1-beec97cc18a6\") " pod="openshift-controller-manager/controller-manager-77b488fb6d-cl9wd" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.746695 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/9ced7a0d-5026-48b4-9bbe-bfe2af69c545-client-ca\") pod \"route-controller-manager-f5ff477dd-4bshk\" (UID: \"9ced7a0d-5026-48b4-9bbe-bfe2af69c545\") " pod="openshift-route-controller-manager/route-controller-manager-f5ff477dd-4bshk" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.747449 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/858eb55a-89cd-40d6-a8a1-beec97cc18a6-client-ca\") pod \"controller-manager-77b488fb6d-cl9wd\" (UID: \"858eb55a-89cd-40d6-a8a1-beec97cc18a6\") " pod="openshift-controller-manager/controller-manager-77b488fb6d-cl9wd" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.747456 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ced7a0d-5026-48b4-9bbe-bfe2af69c545-config\") pod \"route-controller-manager-f5ff477dd-4bshk\" (UID: \"9ced7a0d-5026-48b4-9bbe-bfe2af69c545\") " pod="openshift-route-controller-manager/route-controller-manager-f5ff477dd-4bshk" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.747732 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/858eb55a-89cd-40d6-a8a1-beec97cc18a6-proxy-ca-bundles\") pod \"controller-manager-77b488fb6d-cl9wd\" (UID: \"858eb55a-89cd-40d6-a8a1-beec97cc18a6\") " pod="openshift-controller-manager/controller-manager-77b488fb6d-cl9wd" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.748022 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/858eb55a-89cd-40d6-a8a1-beec97cc18a6-config\") pod \"controller-manager-77b488fb6d-cl9wd\" (UID: \"858eb55a-89cd-40d6-a8a1-beec97cc18a6\") " pod="openshift-controller-manager/controller-manager-77b488fb6d-cl9wd" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.769469 4965 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ced7a0d-5026-48b4-9bbe-bfe2af69c545-serving-cert\") pod \"route-controller-manager-f5ff477dd-4bshk\" (UID: \"9ced7a0d-5026-48b4-9bbe-bfe2af69c545\") " pod="openshift-route-controller-manager/route-controller-manager-f5ff477dd-4bshk" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.769469 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/858eb55a-89cd-40d6-a8a1-beec97cc18a6-serving-cert\") pod \"controller-manager-77b488fb6d-cl9wd\" (UID: \"858eb55a-89cd-40d6-a8a1-beec97cc18a6\") " pod="openshift-controller-manager/controller-manager-77b488fb6d-cl9wd" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.773872 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld42c\" (UniqueName: \"kubernetes.io/projected/9ced7a0d-5026-48b4-9bbe-bfe2af69c545-kube-api-access-ld42c\") pod \"route-controller-manager-f5ff477dd-4bshk\" (UID: \"9ced7a0d-5026-48b4-9bbe-bfe2af69c545\") " pod="openshift-route-controller-manager/route-controller-manager-f5ff477dd-4bshk" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.773941 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jgww\" (UniqueName: \"kubernetes.io/projected/858eb55a-89cd-40d6-a8a1-beec97cc18a6-kube-api-access-4jgww\") pod \"controller-manager-77b488fb6d-cl9wd\" (UID: \"858eb55a-89cd-40d6-a8a1-beec97cc18a6\") " pod="openshift-controller-manager/controller-manager-77b488fb6d-cl9wd" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.901480 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f5ff477dd-4bshk" Feb 19 09:46:23 crc kubenswrapper[4965]: I0219 09:46:23.913103 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77b488fb6d-cl9wd" Feb 19 09:46:24 crc kubenswrapper[4965]: I0219 09:46:24.122843 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f5ff477dd-4bshk"] Feb 19 09:46:24 crc kubenswrapper[4965]: I0219 09:46:24.164482 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f5ff477dd-4bshk" event={"ID":"9ced7a0d-5026-48b4-9bbe-bfe2af69c545","Type":"ContainerStarted","Data":"ce0b054d2b9553dfed852e4dcce888210be6d53defda0298a76ffb1ab73e1be1"} Feb 19 09:46:24 crc kubenswrapper[4965]: I0219 09:46:24.171536 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77b488fb6d-cl9wd"] Feb 19 09:46:25 crc kubenswrapper[4965]: I0219 09:46:25.173544 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77b488fb6d-cl9wd" event={"ID":"858eb55a-89cd-40d6-a8a1-beec97cc18a6","Type":"ContainerStarted","Data":"51f26f531412e59a9693248cd9218022c99d5f520a14298a4823e11f94d9f215"} Feb 19 09:46:25 crc kubenswrapper[4965]: I0219 09:46:25.175735 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77b488fb6d-cl9wd" event={"ID":"858eb55a-89cd-40d6-a8a1-beec97cc18a6","Type":"ContainerStarted","Data":"93250687e3c3a7a776ce269133147bbb6334afdfc5e8efe241d75199446617b7"} Feb 19 09:46:25 crc kubenswrapper[4965]: I0219 09:46:25.177423 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77b488fb6d-cl9wd" Feb 19 09:46:25 crc kubenswrapper[4965]: I0219 09:46:25.179461 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f5ff477dd-4bshk" 
event={"ID":"9ced7a0d-5026-48b4-9bbe-bfe2af69c545","Type":"ContainerStarted","Data":"5db790be9a61b35c203f7f1c15cbba095a0810d137d92cc9c03d6652992b38d5"} Feb 19 09:46:25 crc kubenswrapper[4965]: I0219 09:46:25.180032 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f5ff477dd-4bshk" Feb 19 09:46:25 crc kubenswrapper[4965]: I0219 09:46:25.185569 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77b488fb6d-cl9wd" Feb 19 09:46:25 crc kubenswrapper[4965]: I0219 09:46:25.186356 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f5ff477dd-4bshk" Feb 19 09:46:25 crc kubenswrapper[4965]: I0219 09:46:25.199228 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77b488fb6d-cl9wd" podStartSLOduration=3.199169554 podStartE2EDuration="3.199169554s" podCreationTimestamp="2026-02-19 09:46:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:46:25.195875169 +0000 UTC m=+240.817196479" watchObservedRunningTime="2026-02-19 09:46:25.199169554 +0000 UTC m=+240.820490864" Feb 19 09:46:25 crc kubenswrapper[4965]: I0219 09:46:25.207287 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df67c235-5c37-4aff-be16-64f660b26f32" path="/var/lib/kubelet/pods/df67c235-5c37-4aff-be16-64f660b26f32/volumes" Feb 19 09:46:25 crc kubenswrapper[4965]: I0219 09:46:25.238062 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f5ff477dd-4bshk" podStartSLOduration=3.238028941 podStartE2EDuration="3.238028941s" podCreationTimestamp="2026-02-19 09:46:22 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:46:25.216448367 +0000 UTC m=+240.837769677" watchObservedRunningTime="2026-02-19 09:46:25.238028941 +0000 UTC m=+240.859350251" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.341343 4965 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.343185 4965 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.343256 4965 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.343297 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: E0219 09:46:33.343543 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.343575 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 09:46:33 crc kubenswrapper[4965]: E0219 09:46:33.343594 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.343607 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 09:46:33 crc kubenswrapper[4965]: E0219 09:46:33.343629 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="setup" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.343640 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 09:46:33 crc kubenswrapper[4965]: E0219 09:46:33.343652 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.343662 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 09:46:33 crc kubenswrapper[4965]: E0219 09:46:33.343681 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.343693 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.343692 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e" gracePeriod=15 Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.343697 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003" gracePeriod=15 Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.343730 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0" gracePeriod=15 Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.343697 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3" gracePeriod=15 Feb 19 09:46:33 crc kubenswrapper[4965]: E0219 09:46:33.343706 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.343830 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 09:46:33 crc kubenswrapper[4965]: E0219 09:46:33.343854 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.343868 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.343810 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6" gracePeriod=15 Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.344081 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.344102 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.344125 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.344142 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.344156 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.344174 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.348313 4965 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.378449 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.378512 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.378539 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.378618 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.378649 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.383612 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.480138 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.480183 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.480253 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.480331 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.480387 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.480547 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.480581 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.480655 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.480699 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.480712 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.480734 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.480762 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.480805 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.582346 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.582412 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.582455 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 
09:46:33.582542 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.582590 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.582645 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: I0219 09:46:33.680696 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 09:46:33 crc kubenswrapper[4965]: E0219 09:46:33.708109 4965 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.196:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18959cc3ea7a6b43 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 09:46:33.706605379 +0000 UTC m=+249.327926729,LastTimestamp:2026-02-19 09:46:33.706605379 +0000 UTC m=+249.327926729,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 09:46:34 crc kubenswrapper[4965]: I0219 09:46:34.247451 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"01eebcc34a1d516230195cd5cc7534001142c34966e15dae247d5f25726a9378"} Feb 19 09:46:34 crc kubenswrapper[4965]: I0219 09:46:34.248239 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e6a032de029c53806360db993be7baf620a7fe4aa2ec19a16391262801d40eca"} Feb 19 09:46:34 crc 
kubenswrapper[4965]: I0219 09:46:34.248077 4965 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:34 crc kubenswrapper[4965]: I0219 09:46:34.250570 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 09:46:34 crc kubenswrapper[4965]: I0219 09:46:34.251959 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 09:46:34 crc kubenswrapper[4965]: I0219 09:46:34.252696 4965 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e" exitCode=0 Feb 19 09:46:34 crc kubenswrapper[4965]: I0219 09:46:34.252785 4965 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0" exitCode=0 Feb 19 09:46:34 crc kubenswrapper[4965]: I0219 09:46:34.252856 4965 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003" exitCode=0 Feb 19 09:46:34 crc kubenswrapper[4965]: I0219 09:46:34.252924 4965 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3" exitCode=2 Feb 19 09:46:34 crc kubenswrapper[4965]: I0219 09:46:34.253037 4965 scope.go:117] "RemoveContainer" 
containerID="9c0288960f3e7739ec0587fcefc29e57c0e351c4903326474454df7b6b57a29c" Feb 19 09:46:34 crc kubenswrapper[4965]: I0219 09:46:34.255367 4965 generic.go:334] "Generic (PLEG): container finished" podID="cce58c03-973d-4b4b-8854-cf6d27c71d28" containerID="e1f96256fc4566f30ddeeed4acea99d14d2fd9c3fcbad319edca036182c83870" exitCode=0 Feb 19 09:46:34 crc kubenswrapper[4965]: I0219 09:46:34.255475 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"cce58c03-973d-4b4b-8854-cf6d27c71d28","Type":"ContainerDied","Data":"e1f96256fc4566f30ddeeed4acea99d14d2fd9c3fcbad319edca036182c83870"} Feb 19 09:46:34 crc kubenswrapper[4965]: I0219 09:46:34.256232 4965 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:34 crc kubenswrapper[4965]: I0219 09:46:34.256486 4965 status_manager.go:851] "Failed to get status for pod" podUID="cce58c03-973d-4b4b-8854-cf6d27c71d28" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.200330 4965 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.200653 4965 status_manager.go:851] "Failed to get status for pod" 
podUID="cce58c03-973d-4b4b-8854-cf6d27c71d28" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.263347 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.583826 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.585027 4965 status_manager.go:851] "Failed to get status for pod" podUID="cce58c03-973d-4b4b-8854-cf6d27c71d28" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.585334 4965 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:35 crc kubenswrapper[4965]: E0219 09:46:35.585268 4965 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.196:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18959cc3ea7a6b43 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 09:46:33.706605379 +0000 UTC m=+249.327926729,LastTimestamp:2026-02-19 09:46:33.706605379 +0000 UTC m=+249.327926729,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.614775 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cce58c03-973d-4b4b-8854-cf6d27c71d28-var-lock\") pod \"cce58c03-973d-4b4b-8854-cf6d27c71d28\" (UID: \"cce58c03-973d-4b4b-8854-cf6d27c71d28\") " Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.614863 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cce58c03-973d-4b4b-8854-cf6d27c71d28-kubelet-dir\") pod \"cce58c03-973d-4b4b-8854-cf6d27c71d28\" (UID: \"cce58c03-973d-4b4b-8854-cf6d27c71d28\") " Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.614917 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cce58c03-973d-4b4b-8854-cf6d27c71d28-kube-api-access\") pod \"cce58c03-973d-4b4b-8854-cf6d27c71d28\" (UID: \"cce58c03-973d-4b4b-8854-cf6d27c71d28\") " Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.614971 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/cce58c03-973d-4b4b-8854-cf6d27c71d28-var-lock" (OuterVolumeSpecName: "var-lock") pod "cce58c03-973d-4b4b-8854-cf6d27c71d28" (UID: "cce58c03-973d-4b4b-8854-cf6d27c71d28"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.615117 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cce58c03-973d-4b4b-8854-cf6d27c71d28-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cce58c03-973d-4b4b-8854-cf6d27c71d28" (UID: "cce58c03-973d-4b4b-8854-cf6d27c71d28"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.615603 4965 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cce58c03-973d-4b4b-8854-cf6d27c71d28-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.615685 4965 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cce58c03-973d-4b4b-8854-cf6d27c71d28-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.620248 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cce58c03-973d-4b4b-8854-cf6d27c71d28-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cce58c03-973d-4b4b-8854-cf6d27c71d28" (UID: "cce58c03-973d-4b4b-8854-cf6d27c71d28"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.716002 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cce58c03-973d-4b4b-8854-cf6d27c71d28-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.719454 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.720089 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.721043 4965 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.721238 4965 status_manager.go:851] "Failed to get status for pod" podUID="cce58c03-973d-4b4b-8854-cf6d27c71d28" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.721559 4965 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 
09:46:35.918495 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.918596 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.918695 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.918774 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.918907 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.918979 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.919542 4965 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.919633 4965 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:35 crc kubenswrapper[4965]: I0219 09:46:35.919685 4965 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 09:46:36 crc kubenswrapper[4965]: E0219 09:46:36.069077 4965 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:36 crc kubenswrapper[4965]: E0219 09:46:36.069831 4965 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:36 crc kubenswrapper[4965]: E0219 09:46:36.070104 4965 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:36 crc kubenswrapper[4965]: E0219 09:46:36.070546 4965 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:36 crc kubenswrapper[4965]: E0219 09:46:36.071264 4965 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.071343 4965 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 19 09:46:36 crc kubenswrapper[4965]: E0219 09:46:36.071915 4965 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="200ms" Feb 19 09:46:36 crc kubenswrapper[4965]: E0219 09:46:36.272609 4965 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="400ms" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.272700 4965 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.273872 4965 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6" exitCode=0 Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.273935 4965 scope.go:117] "RemoveContainer" containerID="65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.274052 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.276993 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.281574 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"cce58c03-973d-4b4b-8854-cf6d27c71d28","Type":"ContainerDied","Data":"2c98190457d2258c2fbc77d9a59b7518701919ce4c9b1c1dbadb9325699a9687"} Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.281626 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c98190457d2258c2fbc77d9a59b7518701919ce4c9b1c1dbadb9325699a9687" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.289674 4965 scope.go:117] "RemoveContainer" containerID="9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.296283 4965 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.296471 4965 status_manager.go:851] "Failed to get status for pod" podUID="cce58c03-973d-4b4b-8854-cf6d27c71d28" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.296613 4965 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.299880 4965 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.300660 4965 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.300949 4965 status_manager.go:851] "Failed to get status for pod" podUID="cce58c03-973d-4b4b-8854-cf6d27c71d28" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.307527 4965 scope.go:117] "RemoveContainer" containerID="d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.321402 4965 scope.go:117] "RemoveContainer" containerID="df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.336992 4965 scope.go:117] "RemoveContainer" containerID="31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.351853 4965 scope.go:117] "RemoveContainer" containerID="21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.390175 4965 scope.go:117] "RemoveContainer" containerID="65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e" Feb 19 09:46:36 crc kubenswrapper[4965]: E0219 09:46:36.391902 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\": container with ID starting with 65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e not found: ID does not exist" containerID="65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.391974 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e"} err="failed to get container status \"65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\": rpc error: code = NotFound desc = could not find container \"65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e\": container 
with ID starting with 65b4ac45cd6766a2144308a99817f41ef69812519680ebae6edf2d6c72f1c97e not found: ID does not exist" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.392058 4965 scope.go:117] "RemoveContainer" containerID="9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0" Feb 19 09:46:36 crc kubenswrapper[4965]: E0219 09:46:36.392866 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\": container with ID starting with 9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0 not found: ID does not exist" containerID="9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.392909 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0"} err="failed to get container status \"9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\": rpc error: code = NotFound desc = could not find container \"9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0\": container with ID starting with 9d2dcb23379da52333bd115902957e4051df5472eb6d909d45f4781487e8d6e0 not found: ID does not exist" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.392948 4965 scope.go:117] "RemoveContainer" containerID="d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003" Feb 19 09:46:36 crc kubenswrapper[4965]: E0219 09:46:36.393416 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\": container with ID starting with d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003 not found: ID does not exist" containerID="d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003" 
Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.393459 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003"} err="failed to get container status \"d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\": rpc error: code = NotFound desc = could not find container \"d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003\": container with ID starting with d403f58f80ef84bfe879f95390ffc186d6516ce879f1ca593e9d06b6fdaa9003 not found: ID does not exist" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.393483 4965 scope.go:117] "RemoveContainer" containerID="df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3" Feb 19 09:46:36 crc kubenswrapper[4965]: E0219 09:46:36.393777 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\": container with ID starting with df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3 not found: ID does not exist" containerID="df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.393812 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3"} err="failed to get container status \"df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\": rpc error: code = NotFound desc = could not find container \"df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3\": container with ID starting with df86861693b1090906a34be8e134571c684c24b327ed62db509ca470e79c70c3 not found: ID does not exist" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.393830 4965 scope.go:117] "RemoveContainer" 
containerID="31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6" Feb 19 09:46:36 crc kubenswrapper[4965]: E0219 09:46:36.394305 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\": container with ID starting with 31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6 not found: ID does not exist" containerID="31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.394337 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6"} err="failed to get container status \"31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\": rpc error: code = NotFound desc = could not find container \"31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6\": container with ID starting with 31cd85d70b58724b5ed5071b085e5b20b31083087fc7847dfadccc52cfd717c6 not found: ID does not exist" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.394357 4965 scope.go:117] "RemoveContainer" containerID="21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4" Feb 19 09:46:36 crc kubenswrapper[4965]: E0219 09:46:36.394715 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\": container with ID starting with 21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4 not found: ID does not exist" containerID="21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4" Feb 19 09:46:36 crc kubenswrapper[4965]: I0219 09:46:36.394765 4965 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4"} err="failed to get container status \"21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\": rpc error: code = NotFound desc = could not find container \"21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4\": container with ID starting with 21c7ad2167d9895ef78816e2d8222509ed2802f2901e9123fff9360c234309d4 not found: ID does not exist" Feb 19 09:46:36 crc kubenswrapper[4965]: E0219 09:46:36.673850 4965 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="800ms" Feb 19 09:46:37 crc kubenswrapper[4965]: I0219 09:46:37.206367 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 19 09:46:37 crc kubenswrapper[4965]: E0219 09:46:37.475834 4965 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="1.6s" Feb 19 09:46:39 crc kubenswrapper[4965]: E0219 09:46:39.077447 4965 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="3.2s" Feb 19 09:46:42 crc kubenswrapper[4965]: E0219 09:46:42.278295 4965 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: 
connection refused" interval="6.4s" Feb 19 09:46:45 crc kubenswrapper[4965]: I0219 09:46:45.202270 4965 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:45 crc kubenswrapper[4965]: I0219 09:46:45.203041 4965 status_manager.go:851] "Failed to get status for pod" podUID="cce58c03-973d-4b4b-8854-cf6d27c71d28" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:45 crc kubenswrapper[4965]: E0219 09:46:45.586598 4965 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.196:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18959cc3ea7a6b43 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 09:46:33.706605379 +0000 UTC m=+249.327926729,LastTimestamp:2026-02-19 09:46:33.706605379 +0000 UTC m=+249.327926729,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 09:46:46 crc kubenswrapper[4965]: E0219 09:46:46.282148 4965 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.196:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-842k4" volumeName="registry-storage" Feb 19 09:46:47 crc kubenswrapper[4965]: I0219 09:46:47.197672 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:46:47 crc kubenswrapper[4965]: I0219 09:46:47.198620 4965 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:47 crc kubenswrapper[4965]: I0219 09:46:47.199141 4965 status_manager.go:851] "Failed to get status for pod" podUID="cce58c03-973d-4b4b-8854-cf6d27c71d28" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:47 crc kubenswrapper[4965]: I0219 09:46:47.218560 4965 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="210f2216-544c-43a1-813b-68e47da7447e" Feb 19 09:46:47 crc kubenswrapper[4965]: I0219 09:46:47.218626 4965 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="210f2216-544c-43a1-813b-68e47da7447e" Feb 19 09:46:47 crc kubenswrapper[4965]: E0219 09:46:47.219085 4965 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:46:47 crc kubenswrapper[4965]: I0219 09:46:47.220061 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:46:47 crc kubenswrapper[4965]: I0219 09:46:47.341491 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4878556b56a3cd04febad1f0e19650b046d2e42232e5a06acbf2bc8a1e3b042f"} Feb 19 09:46:48 crc kubenswrapper[4965]: I0219 09:46:48.349662 4965 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="2ad19d6b0eed3b2a5b8e01c72dec897c5da478f6601b590dc7703c3a6f703902" exitCode=0 Feb 19 09:46:48 crc kubenswrapper[4965]: I0219 09:46:48.349732 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"2ad19d6b0eed3b2a5b8e01c72dec897c5da478f6601b590dc7703c3a6f703902"} Feb 19 09:46:48 crc kubenswrapper[4965]: I0219 09:46:48.350375 4965 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="210f2216-544c-43a1-813b-68e47da7447e" Feb 19 09:46:48 crc kubenswrapper[4965]: I0219 09:46:48.350411 4965 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="210f2216-544c-43a1-813b-68e47da7447e" Feb 19 09:46:48 crc kubenswrapper[4965]: I0219 09:46:48.350743 4965 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:48 crc kubenswrapper[4965]: E0219 09:46:48.350999 4965 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:46:48 crc kubenswrapper[4965]: I0219 09:46:48.351171 4965 status_manager.go:851] "Failed to get status for pod" podUID="cce58c03-973d-4b4b-8854-cf6d27c71d28" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:48 crc kubenswrapper[4965]: I0219 09:46:48.353093 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 09:46:48 crc kubenswrapper[4965]: I0219 09:46:48.353131 4965 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de" exitCode=1 Feb 19 09:46:48 crc kubenswrapper[4965]: I0219 09:46:48.353158 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de"} Feb 19 09:46:48 crc kubenswrapper[4965]: I0219 09:46:48.353663 4965 scope.go:117] "RemoveContainer" 
containerID="b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de" Feb 19 09:46:48 crc kubenswrapper[4965]: I0219 09:46:48.354788 4965 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:48 crc kubenswrapper[4965]: I0219 09:46:48.355250 4965 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:48 crc kubenswrapper[4965]: I0219 09:46:48.356367 4965 status_manager.go:851] "Failed to get status for pod" podUID="cce58c03-973d-4b4b-8854-cf6d27c71d28" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Feb 19 09:46:48 crc kubenswrapper[4965]: E0219 09:46:48.680179 4965 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="7s" Feb 19 09:46:49 crc kubenswrapper[4965]: I0219 09:46:49.363792 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9e2314877e38dd54f0313b159178d4a3e3cf11747c0dfc422399fa1a91d7e8dc"} Feb 19 09:46:49 crc kubenswrapper[4965]: I0219 
09:46:49.364324 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b408d25f24b8c9a7c899929378ff01d31d430753b19df5a355ebc8b7046d2482"} Feb 19 09:46:49 crc kubenswrapper[4965]: I0219 09:46:49.364336 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e5440d23bceb95411ce1729ac15821786c39c011cac92970be2d26973b7ffc4e"} Feb 19 09:46:49 crc kubenswrapper[4965]: I0219 09:46:49.364346 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"168e3d9aa06baafa4a26c6706c4d4b74d77fb3c26ccbecedf40a9dc91892087d"} Feb 19 09:46:49 crc kubenswrapper[4965]: I0219 09:46:49.373840 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 09:46:49 crc kubenswrapper[4965]: I0219 09:46:49.373910 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fde05b39565efd0c16535f3967be499f5ab546b0c21a3aea53e43a4537646db3"} Feb 19 09:46:50 crc kubenswrapper[4965]: I0219 09:46:50.387016 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e15ab313c29cb398664ab2ad7396ca8a80f10d21c881b551cd7ef7cc0b03672a"} Feb 19 09:46:50 crc kubenswrapper[4965]: I0219 09:46:50.387269 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 
09:46:50 crc kubenswrapper[4965]: I0219 09:46:50.387401 4965 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="210f2216-544c-43a1-813b-68e47da7447e" Feb 19 09:46:50 crc kubenswrapper[4965]: I0219 09:46:50.387437 4965 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="210f2216-544c-43a1-813b-68e47da7447e" Feb 19 09:46:51 crc kubenswrapper[4965]: I0219 09:46:51.226551 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 09:46:51 crc kubenswrapper[4965]: I0219 09:46:51.231412 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 09:46:51 crc kubenswrapper[4965]: I0219 09:46:51.394990 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 09:46:52 crc kubenswrapper[4965]: I0219 09:46:52.220698 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:46:52 crc kubenswrapper[4965]: I0219 09:46:52.221138 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:46:52 crc kubenswrapper[4965]: I0219 09:46:52.228277 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:46:55 crc kubenswrapper[4965]: I0219 09:46:55.395878 4965 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:46:55 crc kubenswrapper[4965]: I0219 09:46:55.419278 4965 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="210f2216-544c-43a1-813b-68e47da7447e" Feb 19 09:46:55 crc 
kubenswrapper[4965]: I0219 09:46:55.419311 4965 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="210f2216-544c-43a1-813b-68e47da7447e" Feb 19 09:46:55 crc kubenswrapper[4965]: I0219 09:46:55.423630 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:46:55 crc kubenswrapper[4965]: I0219 09:46:55.426123 4965 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="bb7132c6-b68d-49f5-b5ca-e0d624f0641f" Feb 19 09:46:56 crc kubenswrapper[4965]: I0219 09:46:56.423985 4965 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="210f2216-544c-43a1-813b-68e47da7447e" Feb 19 09:46:56 crc kubenswrapper[4965]: I0219 09:46:56.424021 4965 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="210f2216-544c-43a1-813b-68e47da7447e" Feb 19 09:47:04 crc kubenswrapper[4965]: I0219 09:47:04.451186 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 09:47:04 crc kubenswrapper[4965]: I0219 09:47:04.746853 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 09:47:04 crc kubenswrapper[4965]: I0219 09:47:04.964227 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 09:47:05 crc kubenswrapper[4965]: I0219 09:47:05.207568 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 09:47:05 crc kubenswrapper[4965]: I0219 09:47:05.218433 4965 status_manager.go:861] "Pod was deleted and then recreated, skipping 
status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="bb7132c6-b68d-49f5-b5ca-e0d624f0641f" Feb 19 09:47:05 crc kubenswrapper[4965]: I0219 09:47:05.385918 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 09:47:05 crc kubenswrapper[4965]: I0219 09:47:05.521531 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 09:47:05 crc kubenswrapper[4965]: I0219 09:47:05.530843 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 19 09:47:06 crc kubenswrapper[4965]: I0219 09:47:06.200882 4965 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 09:47:06 crc kubenswrapper[4965]: I0219 09:47:06.341920 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 09:47:06 crc kubenswrapper[4965]: I0219 09:47:06.457496 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 09:47:06 crc kubenswrapper[4965]: I0219 09:47:06.476010 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 09:47:06 crc kubenswrapper[4965]: I0219 09:47:06.560162 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 09:47:06 crc kubenswrapper[4965]: I0219 09:47:06.646270 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 09:47:06 crc kubenswrapper[4965]: I0219 09:47:06.684873 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 
09:47:06 crc kubenswrapper[4965]: I0219 09:47:06.833341 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 09:47:06 crc kubenswrapper[4965]: I0219 09:47:06.997755 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 19 09:47:07 crc kubenswrapper[4965]: I0219 09:47:07.077816 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 19 09:47:07 crc kubenswrapper[4965]: I0219 09:47:07.189689 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 09:47:07 crc kubenswrapper[4965]: I0219 09:47:07.446791 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 09:47:07 crc kubenswrapper[4965]: I0219 09:47:07.552654 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 19 09:47:07 crc kubenswrapper[4965]: I0219 09:47:07.610258 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 09:47:07 crc kubenswrapper[4965]: I0219 09:47:07.923821 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 09:47:08 crc kubenswrapper[4965]: I0219 09:47:08.041863 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 09:47:08 crc kubenswrapper[4965]: I0219 09:47:08.067498 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 09:47:08 crc kubenswrapper[4965]: I0219 09:47:08.463315 4965 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 19 09:47:08 crc kubenswrapper[4965]: I0219 09:47:08.577963 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 09:47:08 crc kubenswrapper[4965]: I0219 09:47:08.665268 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 09:47:08 crc kubenswrapper[4965]: I0219 09:47:08.676709 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 09:47:08 crc kubenswrapper[4965]: I0219 09:47:08.706290 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 19 09:47:08 crc kubenswrapper[4965]: I0219 09:47:08.756619 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 09:47:08 crc kubenswrapper[4965]: I0219 09:47:08.762640 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 09:47:08 crc kubenswrapper[4965]: I0219 09:47:08.795152 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 09:47:08 crc kubenswrapper[4965]: I0219 09:47:08.845562 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 09:47:09 crc kubenswrapper[4965]: I0219 09:47:09.079473 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 19 09:47:09 crc kubenswrapper[4965]: I0219 09:47:09.221996 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 09:47:09 crc 
kubenswrapper[4965]: I0219 09:47:09.230497 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 09:47:09 crc kubenswrapper[4965]: I0219 09:47:09.241210 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 09:47:09 crc kubenswrapper[4965]: I0219 09:47:09.261064 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 09:47:09 crc kubenswrapper[4965]: I0219 09:47:09.270007 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 19 09:47:09 crc kubenswrapper[4965]: I0219 09:47:09.365253 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 09:47:09 crc kubenswrapper[4965]: I0219 09:47:09.378722 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 09:47:09 crc kubenswrapper[4965]: I0219 09:47:09.424812 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 09:47:09 crc kubenswrapper[4965]: I0219 09:47:09.524369 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 09:47:09 crc kubenswrapper[4965]: I0219 09:47:09.537257 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 09:47:09 crc kubenswrapper[4965]: I0219 09:47:09.561186 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 09:47:09 crc kubenswrapper[4965]: I0219 09:47:09.562721 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 
09:47:09 crc kubenswrapper[4965]: I0219 09:47:09.710537 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 09:47:09 crc kubenswrapper[4965]: I0219 09:47:09.756720 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 09:47:09 crc kubenswrapper[4965]: I0219 09:47:09.767187 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 09:47:09 crc kubenswrapper[4965]: I0219 09:47:09.769814 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 19 09:47:09 crc kubenswrapper[4965]: I0219 09:47:09.949402 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 09:47:09 crc kubenswrapper[4965]: I0219 09:47:09.997246 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 19 09:47:10 crc kubenswrapper[4965]: I0219 09:47:10.043695 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 19 09:47:10 crc kubenswrapper[4965]: I0219 09:47:10.121075 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 09:47:10 crc kubenswrapper[4965]: I0219 09:47:10.217260 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 09:47:10 crc kubenswrapper[4965]: I0219 09:47:10.252987 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 09:47:10 crc kubenswrapper[4965]: I0219 09:47:10.317492 4965 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4965]: I0219 09:47:10.379114 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 09:47:10 crc kubenswrapper[4965]: I0219 09:47:10.419367 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 09:47:10 crc kubenswrapper[4965]: I0219 09:47:10.507300 4965 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 09:47:10 crc kubenswrapper[4965]: I0219 09:47:10.580542 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 09:47:10 crc kubenswrapper[4965]: I0219 09:47:10.607834 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 09:47:10 crc kubenswrapper[4965]: I0219 09:47:10.662496 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 09:47:10 crc kubenswrapper[4965]: I0219 09:47:10.700622 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4965]: I0219 09:47:10.710247 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 09:47:10 crc kubenswrapper[4965]: I0219 09:47:10.849846 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 19 09:47:10 crc kubenswrapper[4965]: I0219 09:47:10.910400 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 09:47:10 crc kubenswrapper[4965]: I0219 09:47:10.960328 4965 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.100926 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.101655 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.175977 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.179904 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.195531 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.200250 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.223569 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.250492 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.266423 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.286383 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.367756 4965 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.468762 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.500361 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.559834 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.567679 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.574359 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.641408 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.651389 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.707728 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.718149 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.761040 4965 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.793710 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.808307 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.828128 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.943831 4965 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 09:47:11 crc kubenswrapper[4965]: I0219 09:47:11.950016 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 09:47:12 crc kubenswrapper[4965]: I0219 09:47:12.061284 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 09:47:12 crc kubenswrapper[4965]: I0219 09:47:12.138857 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 09:47:12 crc kubenswrapper[4965]: I0219 09:47:12.216136 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 09:47:12 crc kubenswrapper[4965]: I0219 09:47:12.366976 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 09:47:12 crc kubenswrapper[4965]: I0219 09:47:12.388620 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 09:47:12 crc kubenswrapper[4965]: I0219 
09:47:12.412897 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 19 09:47:12 crc kubenswrapper[4965]: I0219 09:47:12.448895 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 09:47:12 crc kubenswrapper[4965]: I0219 09:47:12.553090 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 09:47:12 crc kubenswrapper[4965]: I0219 09:47:12.687996 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 09:47:12 crc kubenswrapper[4965]: I0219 09:47:12.743615 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 09:47:12 crc kubenswrapper[4965]: I0219 09:47:12.815606 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 09:47:12 crc kubenswrapper[4965]: I0219 09:47:12.930397 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 09:47:12 crc kubenswrapper[4965]: I0219 09:47:12.955820 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 09:47:13 crc kubenswrapper[4965]: I0219 09:47:13.122386 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 09:47:13 crc kubenswrapper[4965]: I0219 09:47:13.147277 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 09:47:13 crc kubenswrapper[4965]: I0219 09:47:13.321998 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 09:47:13 crc kubenswrapper[4965]: 
I0219 09:47:13.347048 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 09:47:13 crc kubenswrapper[4965]: I0219 09:47:13.348262 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 09:47:13 crc kubenswrapper[4965]: I0219 09:47:13.486441 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 09:47:13 crc kubenswrapper[4965]: I0219 09:47:13.506870 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 09:47:13 crc kubenswrapper[4965]: I0219 09:47:13.522580 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 09:47:13 crc kubenswrapper[4965]: I0219 09:47:13.532382 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 09:47:13 crc kubenswrapper[4965]: I0219 09:47:13.662962 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 09:47:13 crc kubenswrapper[4965]: I0219 09:47:13.703742 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 09:47:13 crc kubenswrapper[4965]: I0219 09:47:13.705479 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 19 09:47:13 crc kubenswrapper[4965]: I0219 09:47:13.720112 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 09:47:13 crc kubenswrapper[4965]: I0219 09:47:13.766452 4965 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 09:47:13 crc kubenswrapper[4965]: I0219 09:47:13.830348 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 09:47:13 crc kubenswrapper[4965]: I0219 09:47:13.879577 4965 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.081655 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.113430 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.181175 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.195456 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.205041 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.220544 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.267502 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.385332 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.444250 4965 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.454950 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.691819 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.717492 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.766539 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.828169 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.850071 4965 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.852535 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.854264 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.855170 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=41.855148904 podStartE2EDuration="41.855148904s" podCreationTimestamp="2026-02-19 09:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 09:46:55.234171863 +0000 UTC m=+270.855493193" watchObservedRunningTime="2026-02-19 09:47:14.855148904 +0000 UTC m=+290.476470214" Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.855480 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.855533 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.860887 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.873438 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.873411725 podStartE2EDuration="19.873411725s" podCreationTimestamp="2026-02-19 09:46:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:47:14.871739797 +0000 UTC m=+290.493061107" watchObservedRunningTime="2026-02-19 09:47:14.873411725 +0000 UTC m=+290.494733035" Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.934844 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.936014 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 09:47:14 crc kubenswrapper[4965]: I0219 09:47:14.954590 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.088553 4965 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.147214 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.152732 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.176257 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.250928 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.266682 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.330318 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.354285 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.421733 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.447967 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.471984 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 
09:47:15.535286 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.546384 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.651349 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.659247 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.689968 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.760145 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.770370 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.816061 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.843551 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.856924 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.890639 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 09:47:15 crc 
kubenswrapper[4965]: I0219 09:47:15.899023 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.946231 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.955151 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.960802 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 09:47:15 crc kubenswrapper[4965]: I0219 09:47:15.991926 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 09:47:16 crc kubenswrapper[4965]: I0219 09:47:16.007369 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 09:47:16 crc kubenswrapper[4965]: I0219 09:47:16.031718 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 19 09:47:16 crc kubenswrapper[4965]: I0219 09:47:16.033096 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 09:47:16 crc kubenswrapper[4965]: I0219 09:47:16.098846 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 09:47:16 crc kubenswrapper[4965]: I0219 09:47:16.166145 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 09:47:16 crc kubenswrapper[4965]: I0219 09:47:16.249680 4965 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 09:47:16 crc kubenswrapper[4965]: I0219 09:47:16.256401 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 09:47:16 crc kubenswrapper[4965]: I0219 09:47:16.282346 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 19 09:47:16 crc kubenswrapper[4965]: I0219 09:47:16.306910 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 09:47:16 crc kubenswrapper[4965]: I0219 09:47:16.347394 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 09:47:16 crc kubenswrapper[4965]: I0219 09:47:16.349152 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 19 09:47:16 crc kubenswrapper[4965]: I0219 09:47:16.416883 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 09:47:16 crc kubenswrapper[4965]: I0219 09:47:16.502313 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 19 09:47:16 crc kubenswrapper[4965]: I0219 09:47:16.506969 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 09:47:16 crc kubenswrapper[4965]: I0219 09:47:16.539996 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 09:47:16 crc kubenswrapper[4965]: I0219 09:47:16.579492 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 09:47:16 crc kubenswrapper[4965]: I0219 
09:47:16.588654 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 09:47:16 crc kubenswrapper[4965]: I0219 09:47:16.627059 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 09:47:16 crc kubenswrapper[4965]: I0219 09:47:16.656166 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 19 09:47:16 crc kubenswrapper[4965]: I0219 09:47:16.935803 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 09:47:16 crc kubenswrapper[4965]: I0219 09:47:16.943721 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 09:47:16 crc kubenswrapper[4965]: I0219 09:47:16.947870 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 09:47:16 crc kubenswrapper[4965]: I0219 09:47:16.984794 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 09:47:17 crc kubenswrapper[4965]: I0219 09:47:17.052775 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 09:47:17 crc kubenswrapper[4965]: I0219 09:47:17.099026 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 09:47:17 crc kubenswrapper[4965]: I0219 09:47:17.213535 4965 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 09:47:17 crc kubenswrapper[4965]: I0219 09:47:17.350022 4965 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"etcd-client" Feb 19 09:47:17 crc kubenswrapper[4965]: I0219 09:47:17.362885 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 09:47:17 crc kubenswrapper[4965]: I0219 09:47:17.375682 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 09:47:17 crc kubenswrapper[4965]: I0219 09:47:17.414162 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 09:47:17 crc kubenswrapper[4965]: I0219 09:47:17.442075 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 19 09:47:17 crc kubenswrapper[4965]: I0219 09:47:17.508158 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 09:47:17 crc kubenswrapper[4965]: I0219 09:47:17.598342 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 09:47:17 crc kubenswrapper[4965]: I0219 09:47:17.606782 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 09:47:17 crc kubenswrapper[4965]: I0219 09:47:17.645691 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 09:47:17 crc kubenswrapper[4965]: I0219 09:47:17.800737 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 09:47:17 crc kubenswrapper[4965]: I0219 09:47:17.814614 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 09:47:17 crc 
kubenswrapper[4965]: I0219 09:47:17.857681 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 09:47:17 crc kubenswrapper[4965]: I0219 09:47:17.865745 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 09:47:17 crc kubenswrapper[4965]: I0219 09:47:17.891662 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 19 09:47:17 crc kubenswrapper[4965]: I0219 09:47:17.913945 4965 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 09:47:17 crc kubenswrapper[4965]: I0219 09:47:17.914227 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://01eebcc34a1d516230195cd5cc7534001142c34966e15dae247d5f25726a9378" gracePeriod=5 Feb 19 09:47:17 crc kubenswrapper[4965]: I0219 09:47:17.989539 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 09:47:17 crc kubenswrapper[4965]: I0219 09:47:17.991566 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 09:47:18 crc kubenswrapper[4965]: I0219 09:47:18.002402 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 09:47:18 crc kubenswrapper[4965]: I0219 09:47:18.016223 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 09:47:18 crc kubenswrapper[4965]: I0219 09:47:18.074158 4965 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 09:47:18 crc kubenswrapper[4965]: I0219 09:47:18.092056 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 09:47:18 crc kubenswrapper[4965]: I0219 09:47:18.111551 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 09:47:18 crc kubenswrapper[4965]: I0219 09:47:18.231949 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 09:47:18 crc kubenswrapper[4965]: I0219 09:47:18.281153 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 09:47:18 crc kubenswrapper[4965]: I0219 09:47:18.292695 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 09:47:18 crc kubenswrapper[4965]: I0219 09:47:18.357919 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 09:47:18 crc kubenswrapper[4965]: I0219 09:47:18.363117 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 09:47:18 crc kubenswrapper[4965]: I0219 09:47:18.387684 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 09:47:18 crc kubenswrapper[4965]: I0219 09:47:18.391358 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 09:47:18 crc kubenswrapper[4965]: I0219 09:47:18.420433 4965 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 09:47:18 crc kubenswrapper[4965]: I0219 09:47:18.869156 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 09:47:18 crc kubenswrapper[4965]: I0219 09:47:18.889857 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 09:47:18 crc kubenswrapper[4965]: I0219 09:47:18.907563 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 09:47:18 crc kubenswrapper[4965]: I0219 09:47:18.958373 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 09:47:19 crc kubenswrapper[4965]: I0219 09:47:19.050440 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 09:47:19 crc kubenswrapper[4965]: I0219 09:47:19.137382 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 09:47:19 crc kubenswrapper[4965]: I0219 09:47:19.164778 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 09:47:19 crc kubenswrapper[4965]: I0219 09:47:19.278929 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 09:47:19 crc kubenswrapper[4965]: I0219 09:47:19.394066 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 09:47:19 crc kubenswrapper[4965]: I0219 09:47:19.433227 4965 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 09:47:19 crc kubenswrapper[4965]: I0219 09:47:19.486725 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 09:47:19 crc kubenswrapper[4965]: I0219 09:47:19.504787 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 09:47:19 crc kubenswrapper[4965]: I0219 09:47:19.529089 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 09:47:19 crc kubenswrapper[4965]: I0219 09:47:19.549520 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 09:47:19 crc kubenswrapper[4965]: I0219 09:47:19.633434 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 09:47:20 crc kubenswrapper[4965]: I0219 09:47:20.074307 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 09:47:20 crc kubenswrapper[4965]: I0219 09:47:20.321608 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 09:47:20 crc kubenswrapper[4965]: I0219 09:47:20.517217 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 09:47:20 crc kubenswrapper[4965]: I0219 09:47:20.569979 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 09:47:20 crc kubenswrapper[4965]: I0219 09:47:20.617335 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 09:47:20 crc kubenswrapper[4965]: I0219 09:47:20.754781 4965 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 09:47:20 crc kubenswrapper[4965]: I0219 09:47:20.755163 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 09:47:20 crc kubenswrapper[4965]: I0219 09:47:20.820061 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 09:47:20 crc kubenswrapper[4965]: I0219 09:47:20.839906 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 09:47:21 crc kubenswrapper[4965]: I0219 09:47:21.316515 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 09:47:21 crc kubenswrapper[4965]: I0219 09:47:21.390104 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 09:47:22 crc kubenswrapper[4965]: I0219 09:47:22.042836 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 09:47:22 crc kubenswrapper[4965]: I0219 09:47:22.099468 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 09:47:23 crc kubenswrapper[4965]: I0219 09:47:23.502157 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 19 09:47:23 crc kubenswrapper[4965]: I0219 09:47:23.502324 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 09:47:23 crc kubenswrapper[4965]: I0219 09:47:23.601423 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 19 09:47:23 crc kubenswrapper[4965]: I0219 09:47:23.601479 4965 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="01eebcc34a1d516230195cd5cc7534001142c34966e15dae247d5f25726a9378" exitCode=137 Feb 19 09:47:23 crc kubenswrapper[4965]: I0219 09:47:23.601524 4965 scope.go:117] "RemoveContainer" containerID="01eebcc34a1d516230195cd5cc7534001142c34966e15dae247d5f25726a9378" Feb 19 09:47:23 crc kubenswrapper[4965]: I0219 09:47:23.601548 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 09:47:23 crc kubenswrapper[4965]: I0219 09:47:23.623121 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 09:47:23 crc kubenswrapper[4965]: I0219 09:47:23.623169 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 09:47:23 crc kubenswrapper[4965]: I0219 09:47:23.623206 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 09:47:23 crc 
kubenswrapper[4965]: I0219 09:47:23.623294 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 09:47:23 crc kubenswrapper[4965]: I0219 09:47:23.623340 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 09:47:23 crc kubenswrapper[4965]: I0219 09:47:23.623942 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:47:23 crc kubenswrapper[4965]: I0219 09:47:23.623990 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:47:23 crc kubenswrapper[4965]: I0219 09:47:23.624010 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:47:23 crc kubenswrapper[4965]: I0219 09:47:23.624029 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:47:23 crc kubenswrapper[4965]: I0219 09:47:23.630043 4965 scope.go:117] "RemoveContainer" containerID="01eebcc34a1d516230195cd5cc7534001142c34966e15dae247d5f25726a9378" Feb 19 09:47:23 crc kubenswrapper[4965]: E0219 09:47:23.630585 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01eebcc34a1d516230195cd5cc7534001142c34966e15dae247d5f25726a9378\": container with ID starting with 01eebcc34a1d516230195cd5cc7534001142c34966e15dae247d5f25726a9378 not found: ID does not exist" containerID="01eebcc34a1d516230195cd5cc7534001142c34966e15dae247d5f25726a9378" Feb 19 09:47:23 crc kubenswrapper[4965]: I0219 09:47:23.630650 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01eebcc34a1d516230195cd5cc7534001142c34966e15dae247d5f25726a9378"} err="failed to get container status \"01eebcc34a1d516230195cd5cc7534001142c34966e15dae247d5f25726a9378\": rpc error: code = NotFound desc = could not find container \"01eebcc34a1d516230195cd5cc7534001142c34966e15dae247d5f25726a9378\": container with ID starting with 01eebcc34a1d516230195cd5cc7534001142c34966e15dae247d5f25726a9378 not found: ID does not exist" Feb 19 09:47:23 crc kubenswrapper[4965]: I0219 09:47:23.633417 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod 
"f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:47:23 crc kubenswrapper[4965]: I0219 09:47:23.724931 4965 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 19 09:47:23 crc kubenswrapper[4965]: I0219 09:47:23.724981 4965 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 09:47:23 crc kubenswrapper[4965]: I0219 09:47:23.725000 4965 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 09:47:23 crc kubenswrapper[4965]: I0219 09:47:23.725011 4965 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 19 09:47:23 crc kubenswrapper[4965]: I0219 09:47:23.725022 4965 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 09:47:24 crc kubenswrapper[4965]: I0219 09:47:24.916358 4965 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 19 09:47:25 crc kubenswrapper[4965]: I0219 09:47:25.209473 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 19 09:47:25 crc kubenswrapper[4965]: I0219 09:47:25.210224 4965 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 19 09:47:25 crc kubenswrapper[4965]: I0219 09:47:25.222313 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 09:47:25 crc kubenswrapper[4965]: I0219 09:47:25.222367 4965 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="a4b15392-0b1a-4c57-a256-6d37b83a0a64" Feb 19 09:47:25 crc kubenswrapper[4965]: I0219 09:47:25.228425 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 09:47:25 crc kubenswrapper[4965]: I0219 09:47:25.228506 4965 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="a4b15392-0b1a-4c57-a256-6d37b83a0a64" Feb 19 09:47:36 crc kubenswrapper[4965]: I0219 09:47:36.687806 4965 generic.go:334] "Generic (PLEG): container finished" podID="190603fe-6420-4d17-91f5-c37c9038002c" containerID="7178023f0380d6235cb79ddd0cd42412ad0ab55cdb1fb062a5ef0b2216b15cf0" exitCode=0 Feb 19 09:47:36 crc kubenswrapper[4965]: I0219 09:47:36.687883 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" event={"ID":"190603fe-6420-4d17-91f5-c37c9038002c","Type":"ContainerDied","Data":"7178023f0380d6235cb79ddd0cd42412ad0ab55cdb1fb062a5ef0b2216b15cf0"} Feb 19 09:47:36 crc kubenswrapper[4965]: I0219 09:47:36.689829 4965 scope.go:117] "RemoveContainer" containerID="7178023f0380d6235cb79ddd0cd42412ad0ab55cdb1fb062a5ef0b2216b15cf0" Feb 19 09:47:37 crc kubenswrapper[4965]: I0219 09:47:37.696905 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" 
event={"ID":"190603fe-6420-4d17-91f5-c37c9038002c","Type":"ContainerStarted","Data":"08f17e9c35d3d6e156ac4bfb50ed365c14a0090cf9d43cf075454888c661de19"} Feb 19 09:47:37 crc kubenswrapper[4965]: I0219 09:47:37.697495 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" Feb 19 09:47:37 crc kubenswrapper[4965]: I0219 09:47:37.704562 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.094247 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2v5s2"] Feb 19 09:48:06 crc kubenswrapper[4965]: E0219 09:48:06.095476 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cce58c03-973d-4b4b-8854-cf6d27c71d28" containerName="installer" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.095493 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="cce58c03-973d-4b4b-8854-cf6d27c71d28" containerName="installer" Feb 19 09:48:06 crc kubenswrapper[4965]: E0219 09:48:06.095507 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.095514 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.095620 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.095634 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="cce58c03-973d-4b4b-8854-cf6d27c71d28" containerName="installer" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.096120 4965 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.136046 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2v5s2"] Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.166440 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2v5s2\" (UID: \"f9ee7bb9-ae17-486a-af74-a6e85d72a74f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.166507 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9ee7bb9-ae17-486a-af74-a6e85d72a74f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2v5s2\" (UID: \"f9ee7bb9-ae17-486a-af74-a6e85d72a74f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.166620 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9ee7bb9-ae17-486a-af74-a6e85d72a74f-bound-sa-token\") pod \"image-registry-66df7c8f76-2v5s2\" (UID: \"f9ee7bb9-ae17-486a-af74-a6e85d72a74f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.166917 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9ee7bb9-ae17-486a-af74-a6e85d72a74f-trusted-ca\") pod \"image-registry-66df7c8f76-2v5s2\" (UID: \"f9ee7bb9-ae17-486a-af74-a6e85d72a74f\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.166997 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9ee7bb9-ae17-486a-af74-a6e85d72a74f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2v5s2\" (UID: \"f9ee7bb9-ae17-486a-af74-a6e85d72a74f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.167055 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9ee7bb9-ae17-486a-af74-a6e85d72a74f-registry-tls\") pod \"image-registry-66df7c8f76-2v5s2\" (UID: \"f9ee7bb9-ae17-486a-af74-a6e85d72a74f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.167076 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9ee7bb9-ae17-486a-af74-a6e85d72a74f-registry-certificates\") pod \"image-registry-66df7c8f76-2v5s2\" (UID: \"f9ee7bb9-ae17-486a-af74-a6e85d72a74f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.167095 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5pth\" (UniqueName: \"kubernetes.io/projected/f9ee7bb9-ae17-486a-af74-a6e85d72a74f-kube-api-access-j5pth\") pod \"image-registry-66df7c8f76-2v5s2\" (UID: \"f9ee7bb9-ae17-486a-af74-a6e85d72a74f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.201878 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2v5s2\" (UID: \"f9ee7bb9-ae17-486a-af74-a6e85d72a74f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.269123 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9ee7bb9-ae17-486a-af74-a6e85d72a74f-registry-tls\") pod \"image-registry-66df7c8f76-2v5s2\" (UID: \"f9ee7bb9-ae17-486a-af74-a6e85d72a74f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.269253 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9ee7bb9-ae17-486a-af74-a6e85d72a74f-registry-certificates\") pod \"image-registry-66df7c8f76-2v5s2\" (UID: \"f9ee7bb9-ae17-486a-af74-a6e85d72a74f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.269289 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5pth\" (UniqueName: \"kubernetes.io/projected/f9ee7bb9-ae17-486a-af74-a6e85d72a74f-kube-api-access-j5pth\") pod \"image-registry-66df7c8f76-2v5s2\" (UID: \"f9ee7bb9-ae17-486a-af74-a6e85d72a74f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.269350 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9ee7bb9-ae17-486a-af74-a6e85d72a74f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2v5s2\" (UID: \"f9ee7bb9-ae17-486a-af74-a6e85d72a74f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc 
kubenswrapper[4965]: I0219 09:48:06.269409 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9ee7bb9-ae17-486a-af74-a6e85d72a74f-bound-sa-token\") pod \"image-registry-66df7c8f76-2v5s2\" (UID: \"f9ee7bb9-ae17-486a-af74-a6e85d72a74f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.269443 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9ee7bb9-ae17-486a-af74-a6e85d72a74f-trusted-ca\") pod \"image-registry-66df7c8f76-2v5s2\" (UID: \"f9ee7bb9-ae17-486a-af74-a6e85d72a74f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.269463 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9ee7bb9-ae17-486a-af74-a6e85d72a74f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2v5s2\" (UID: \"f9ee7bb9-ae17-486a-af74-a6e85d72a74f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.275793 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9ee7bb9-ae17-486a-af74-a6e85d72a74f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2v5s2\" (UID: \"f9ee7bb9-ae17-486a-af74-a6e85d72a74f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.276837 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9ee7bb9-ae17-486a-af74-a6e85d72a74f-trusted-ca\") pod \"image-registry-66df7c8f76-2v5s2\" (UID: \"f9ee7bb9-ae17-486a-af74-a6e85d72a74f\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.277079 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9ee7bb9-ae17-486a-af74-a6e85d72a74f-registry-certificates\") pod \"image-registry-66df7c8f76-2v5s2\" (UID: \"f9ee7bb9-ae17-486a-af74-a6e85d72a74f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.279518 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9ee7bb9-ae17-486a-af74-a6e85d72a74f-registry-tls\") pod \"image-registry-66df7c8f76-2v5s2\" (UID: \"f9ee7bb9-ae17-486a-af74-a6e85d72a74f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.279590 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9ee7bb9-ae17-486a-af74-a6e85d72a74f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2v5s2\" (UID: \"f9ee7bb9-ae17-486a-af74-a6e85d72a74f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.294313 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9ee7bb9-ae17-486a-af74-a6e85d72a74f-bound-sa-token\") pod \"image-registry-66df7c8f76-2v5s2\" (UID: \"f9ee7bb9-ae17-486a-af74-a6e85d72a74f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.298959 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5pth\" (UniqueName: \"kubernetes.io/projected/f9ee7bb9-ae17-486a-af74-a6e85d72a74f-kube-api-access-j5pth\") pod 
\"image-registry-66df7c8f76-2v5s2\" (UID: \"f9ee7bb9-ae17-486a-af74-a6e85d72a74f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.416389 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.651225 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2v5s2"] Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.895116 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" event={"ID":"f9ee7bb9-ae17-486a-af74-a6e85d72a74f","Type":"ContainerStarted","Data":"474550741dd617a19b3998940b90b4a5fa242d3cf7b38a71b26ee55d65d7229c"} Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.895590 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" event={"ID":"f9ee7bb9-ae17-486a-af74-a6e85d72a74f","Type":"ContainerStarted","Data":"8e3c9612a40ac95ff83a90ba45579361791d7981b3e0df231f4a19317c72a281"} Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.896491 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:06 crc kubenswrapper[4965]: I0219 09:48:06.920247 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" podStartSLOduration=0.920224754 podStartE2EDuration="920.224754ms" podCreationTimestamp="2026-02-19 09:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:48:06.919143036 +0000 UTC m=+342.540464366" watchObservedRunningTime="2026-02-19 09:48:06.920224754 +0000 UTC m=+342.541546074" Feb 19 09:48:16 crc 
kubenswrapper[4965]: I0219 09:48:16.601775 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:48:16 crc kubenswrapper[4965]: I0219 09:48:16.603712 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:48:26 crc kubenswrapper[4965]: I0219 09:48:26.428979 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-2v5s2" Feb 19 09:48:26 crc kubenswrapper[4965]: I0219 09:48:26.540407 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-842k4"] Feb 19 09:48:46 crc kubenswrapper[4965]: I0219 09:48:46.601047 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:48:46 crc kubenswrapper[4965]: I0219 09:48:46.603887 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:48:51 crc kubenswrapper[4965]: I0219 09:48:51.598879 4965 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-image-registry/image-registry-697d97f7c8-842k4" podUID="22aed16a-0375-45f1-8762-8d5afddf848a" containerName="registry" containerID="cri-o://f6fcf298219a38a8a4cdf7848a9faf7a45835e13a724f0074bcb959825282cc5" gracePeriod=30 Feb 19 09:48:51 crc kubenswrapper[4965]: I0219 09:48:51.968490 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.132652 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22aed16a-0375-45f1-8762-8d5afddf848a-bound-sa-token\") pod \"22aed16a-0375-45f1-8762-8d5afddf848a\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.132984 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"22aed16a-0375-45f1-8762-8d5afddf848a\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.133024 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/22aed16a-0375-45f1-8762-8d5afddf848a-installation-pull-secrets\") pod \"22aed16a-0375-45f1-8762-8d5afddf848a\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.133049 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/22aed16a-0375-45f1-8762-8d5afddf848a-registry-certificates\") pod \"22aed16a-0375-45f1-8762-8d5afddf848a\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.133080 4965 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltxjm\" (UniqueName: \"kubernetes.io/projected/22aed16a-0375-45f1-8762-8d5afddf848a-kube-api-access-ltxjm\") pod \"22aed16a-0375-45f1-8762-8d5afddf848a\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.133139 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22aed16a-0375-45f1-8762-8d5afddf848a-trusted-ca\") pod \"22aed16a-0375-45f1-8762-8d5afddf848a\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.133169 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/22aed16a-0375-45f1-8762-8d5afddf848a-ca-trust-extracted\") pod \"22aed16a-0375-45f1-8762-8d5afddf848a\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.133224 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/22aed16a-0375-45f1-8762-8d5afddf848a-registry-tls\") pod \"22aed16a-0375-45f1-8762-8d5afddf848a\" (UID: \"22aed16a-0375-45f1-8762-8d5afddf848a\") " Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.134390 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22aed16a-0375-45f1-8762-8d5afddf848a-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "22aed16a-0375-45f1-8762-8d5afddf848a" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.141595 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22aed16a-0375-45f1-8762-8d5afddf848a-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "22aed16a-0375-45f1-8762-8d5afddf848a" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.141687 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22aed16a-0375-45f1-8762-8d5afddf848a-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "22aed16a-0375-45f1-8762-8d5afddf848a" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.141708 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22aed16a-0375-45f1-8762-8d5afddf848a-kube-api-access-ltxjm" (OuterVolumeSpecName: "kube-api-access-ltxjm") pod "22aed16a-0375-45f1-8762-8d5afddf848a" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a"). InnerVolumeSpecName "kube-api-access-ltxjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.142415 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22aed16a-0375-45f1-8762-8d5afddf848a-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "22aed16a-0375-45f1-8762-8d5afddf848a" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.143055 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22aed16a-0375-45f1-8762-8d5afddf848a-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "22aed16a-0375-45f1-8762-8d5afddf848a" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.150127 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "22aed16a-0375-45f1-8762-8d5afddf848a" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.153929 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22aed16a-0375-45f1-8762-8d5afddf848a-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "22aed16a-0375-45f1-8762-8d5afddf848a" (UID: "22aed16a-0375-45f1-8762-8d5afddf848a"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.178181 4965 generic.go:334] "Generic (PLEG): container finished" podID="22aed16a-0375-45f1-8762-8d5afddf848a" containerID="f6fcf298219a38a8a4cdf7848a9faf7a45835e13a724f0074bcb959825282cc5" exitCode=0 Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.178279 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-842k4" event={"ID":"22aed16a-0375-45f1-8762-8d5afddf848a","Type":"ContainerDied","Data":"f6fcf298219a38a8a4cdf7848a9faf7a45835e13a724f0074bcb959825282cc5"} Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.178308 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-842k4" event={"ID":"22aed16a-0375-45f1-8762-8d5afddf848a","Type":"ContainerDied","Data":"e4441b8dc7142e1452908c8d31eca19f2bde14eb629afd1838130863ef4b1d7a"} Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.178326 4965 scope.go:117] "RemoveContainer" containerID="f6fcf298219a38a8a4cdf7848a9faf7a45835e13a724f0074bcb959825282cc5" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.178441 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-842k4" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.208790 4965 scope.go:117] "RemoveContainer" containerID="f6fcf298219a38a8a4cdf7848a9faf7a45835e13a724f0074bcb959825282cc5" Feb 19 09:48:52 crc kubenswrapper[4965]: E0219 09:48:52.209433 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6fcf298219a38a8a4cdf7848a9faf7a45835e13a724f0074bcb959825282cc5\": container with ID starting with f6fcf298219a38a8a4cdf7848a9faf7a45835e13a724f0074bcb959825282cc5 not found: ID does not exist" containerID="f6fcf298219a38a8a4cdf7848a9faf7a45835e13a724f0074bcb959825282cc5" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.209481 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6fcf298219a38a8a4cdf7848a9faf7a45835e13a724f0074bcb959825282cc5"} err="failed to get container status \"f6fcf298219a38a8a4cdf7848a9faf7a45835e13a724f0074bcb959825282cc5\": rpc error: code = NotFound desc = could not find container \"f6fcf298219a38a8a4cdf7848a9faf7a45835e13a724f0074bcb959825282cc5\": container with ID starting with f6fcf298219a38a8a4cdf7848a9faf7a45835e13a724f0074bcb959825282cc5 not found: ID does not exist" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.224571 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-842k4"] Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.228817 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-842k4"] Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.234449 4965 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22aed16a-0375-45f1-8762-8d5afddf848a-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:52 crc kubenswrapper[4965]: 
I0219 09:48:52.234494 4965 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/22aed16a-0375-45f1-8762-8d5afddf848a-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.234507 4965 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/22aed16a-0375-45f1-8762-8d5afddf848a-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.234520 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltxjm\" (UniqueName: \"kubernetes.io/projected/22aed16a-0375-45f1-8762-8d5afddf848a-kube-api-access-ltxjm\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.234533 4965 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22aed16a-0375-45f1-8762-8d5afddf848a-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.234545 4965 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/22aed16a-0375-45f1-8762-8d5afddf848a-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.234556 4965 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/22aed16a-0375-45f1-8762-8d5afddf848a-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.714782 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fxnw5"] Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.715501 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fxnw5" 
podUID="c2ea1b40-1bc8-462a-a2a2-218c24c27584" containerName="registry-server" containerID="cri-o://971cb5acd33203801bcec353714e16d855edfdbd368fc8ce536a258a4a1af14b" gracePeriod=30 Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.730818 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tlmst"] Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.747444 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vkrsj"] Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.747705 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" podUID="190603fe-6420-4d17-91f5-c37c9038002c" containerName="marketplace-operator" containerID="cri-o://08f17e9c35d3d6e156ac4bfb50ed365c14a0090cf9d43cf075454888c661de19" gracePeriod=30 Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.757270 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c55hf"] Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.757576 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c55hf" podUID="b1832525-d3f5-47bc-879b-4d4e4f3c14bd" containerName="registry-server" containerID="cri-o://b6ced0454e0dc5fb5ccbcc5e5378ccb5ff7f72fcdcfb20212bcbdb2d3d7e2809" gracePeriod=30 Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.764640 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s8pfp"] Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.764974 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s8pfp" podUID="e428e472-401e-45b3-b70b-d2e0f19b52f9" containerName="registry-server" containerID="cri-o://672d235e141d03c0b7695ab7aaeabf0cf31a3ba4c6a63fbafad80b099dd50a13" 
gracePeriod=30 Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.769592 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pbfkw"] Feb 19 09:48:52 crc kubenswrapper[4965]: E0219 09:48:52.769881 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22aed16a-0375-45f1-8762-8d5afddf848a" containerName="registry" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.769894 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="22aed16a-0375-45f1-8762-8d5afddf848a" containerName="registry" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.769996 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="22aed16a-0375-45f1-8762-8d5afddf848a" containerName="registry" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.770818 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pbfkw" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.780740 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pbfkw"] Feb 19 09:48:52 crc kubenswrapper[4965]: E0219 09:48:52.877564 4965 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 971cb5acd33203801bcec353714e16d855edfdbd368fc8ce536a258a4a1af14b is running failed: container process not found" containerID="971cb5acd33203801bcec353714e16d855edfdbd368fc8ce536a258a4a1af14b" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 09:48:52 crc kubenswrapper[4965]: E0219 09:48:52.879154 4965 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 971cb5acd33203801bcec353714e16d855edfdbd368fc8ce536a258a4a1af14b is running failed: container process not found" 
containerID="971cb5acd33203801bcec353714e16d855edfdbd368fc8ce536a258a4a1af14b" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 09:48:52 crc kubenswrapper[4965]: E0219 09:48:52.880048 4965 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 971cb5acd33203801bcec353714e16d855edfdbd368fc8ce536a258a4a1af14b is running failed: container process not found" containerID="971cb5acd33203801bcec353714e16d855edfdbd368fc8ce536a258a4a1af14b" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 09:48:52 crc kubenswrapper[4965]: E0219 09:48:52.880087 4965 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 971cb5acd33203801bcec353714e16d855edfdbd368fc8ce536a258a4a1af14b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-fxnw5" podUID="c2ea1b40-1bc8-462a-a2a2-218c24c27584" containerName="registry-server" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.943661 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7rdn\" (UniqueName: \"kubernetes.io/projected/16a589f2-57f9-460f-9802-1c63bd877a05-kube-api-access-z7rdn\") pod \"marketplace-operator-79b997595-pbfkw\" (UID: \"16a589f2-57f9-460f-9802-1c63bd877a05\") " pod="openshift-marketplace/marketplace-operator-79b997595-pbfkw" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.943729 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16a589f2-57f9-460f-9802-1c63bd877a05-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pbfkw\" (UID: \"16a589f2-57f9-460f-9802-1c63bd877a05\") " pod="openshift-marketplace/marketplace-operator-79b997595-pbfkw" Feb 19 09:48:52 crc kubenswrapper[4965]: I0219 09:48:52.943799 
4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/16a589f2-57f9-460f-9802-1c63bd877a05-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pbfkw\" (UID: \"16a589f2-57f9-460f-9802-1c63bd877a05\") " pod="openshift-marketplace/marketplace-operator-79b997595-pbfkw" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.045267 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16a589f2-57f9-460f-9802-1c63bd877a05-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pbfkw\" (UID: \"16a589f2-57f9-460f-9802-1c63bd877a05\") " pod="openshift-marketplace/marketplace-operator-79b997595-pbfkw" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.047626 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/16a589f2-57f9-460f-9802-1c63bd877a05-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pbfkw\" (UID: \"16a589f2-57f9-460f-9802-1c63bd877a05\") " pod="openshift-marketplace/marketplace-operator-79b997595-pbfkw" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.047848 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7rdn\" (UniqueName: \"kubernetes.io/projected/16a589f2-57f9-460f-9802-1c63bd877a05-kube-api-access-z7rdn\") pod \"marketplace-operator-79b997595-pbfkw\" (UID: \"16a589f2-57f9-460f-9802-1c63bd877a05\") " pod="openshift-marketplace/marketplace-operator-79b997595-pbfkw" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.048647 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16a589f2-57f9-460f-9802-1c63bd877a05-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-pbfkw\" (UID: \"16a589f2-57f9-460f-9802-1c63bd877a05\") " pod="openshift-marketplace/marketplace-operator-79b997595-pbfkw" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.079620 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7rdn\" (UniqueName: \"kubernetes.io/projected/16a589f2-57f9-460f-9802-1c63bd877a05-kube-api-access-z7rdn\") pod \"marketplace-operator-79b997595-pbfkw\" (UID: \"16a589f2-57f9-460f-9802-1c63bd877a05\") " pod="openshift-marketplace/marketplace-operator-79b997595-pbfkw" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.090800 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/16a589f2-57f9-460f-9802-1c63bd877a05-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pbfkw\" (UID: \"16a589f2-57f9-460f-9802-1c63bd877a05\") " pod="openshift-marketplace/marketplace-operator-79b997595-pbfkw" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.108723 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pbfkw" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.164088 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s8pfp" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.217156 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22aed16a-0375-45f1-8762-8d5afddf848a" path="/var/lib/kubelet/pods/22aed16a-0375-45f1-8762-8d5afddf848a/volumes" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.218232 4965 generic.go:334] "Generic (PLEG): container finished" podID="b1832525-d3f5-47bc-879b-4d4e4f3c14bd" containerID="b6ced0454e0dc5fb5ccbcc5e5378ccb5ff7f72fcdcfb20212bcbdb2d3d7e2809" exitCode=0 Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.222423 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c55hf" event={"ID":"b1832525-d3f5-47bc-879b-4d4e4f3c14bd","Type":"ContainerDied","Data":"b6ced0454e0dc5fb5ccbcc5e5378ccb5ff7f72fcdcfb20212bcbdb2d3d7e2809"} Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.229166 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.239743 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxnw5" event={"ID":"c2ea1b40-1bc8-462a-a2a2-218c24c27584","Type":"ContainerDied","Data":"971cb5acd33203801bcec353714e16d855edfdbd368fc8ce536a258a4a1af14b"} Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.239754 4965 generic.go:334] "Generic (PLEG): container finished" podID="c2ea1b40-1bc8-462a-a2a2-218c24c27584" containerID="971cb5acd33203801bcec353714e16d855edfdbd368fc8ce536a258a4a1af14b" exitCode=0 Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.246663 4965 generic.go:334] "Generic (PLEG): container finished" podID="190603fe-6420-4d17-91f5-c37c9038002c" containerID="08f17e9c35d3d6e156ac4bfb50ed365c14a0090cf9d43cf075454888c661de19" exitCode=0 Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.246757 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.246761 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vkrsj" event={"ID":"190603fe-6420-4d17-91f5-c37c9038002c","Type":"ContainerDied","Data":"08f17e9c35d3d6e156ac4bfb50ed365c14a0090cf9d43cf075454888c661de19"} Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.246892 4965 scope.go:117] "RemoveContainer" containerID="08f17e9c35d3d6e156ac4bfb50ed365c14a0090cf9d43cf075454888c661de19" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.257700 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e428e472-401e-45b3-b70b-d2e0f19b52f9-catalog-content\") pod \"e428e472-401e-45b3-b70b-d2e0f19b52f9\" (UID: \"e428e472-401e-45b3-b70b-d2e0f19b52f9\") " Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.257768 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e428e472-401e-45b3-b70b-d2e0f19b52f9-utilities\") pod \"e428e472-401e-45b3-b70b-d2e0f19b52f9\" (UID: \"e428e472-401e-45b3-b70b-d2e0f19b52f9\") " Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.257906 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfljm\" (UniqueName: \"kubernetes.io/projected/e428e472-401e-45b3-b70b-d2e0f19b52f9-kube-api-access-lfljm\") pod \"e428e472-401e-45b3-b70b-d2e0f19b52f9\" (UID: \"e428e472-401e-45b3-b70b-d2e0f19b52f9\") " Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.260407 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e428e472-401e-45b3-b70b-d2e0f19b52f9-utilities" (OuterVolumeSpecName: "utilities") pod "e428e472-401e-45b3-b70b-d2e0f19b52f9" (UID: 
"e428e472-401e-45b3-b70b-d2e0f19b52f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.266294 4965 generic.go:334] "Generic (PLEG): container finished" podID="e428e472-401e-45b3-b70b-d2e0f19b52f9" containerID="672d235e141d03c0b7695ab7aaeabf0cf31a3ba4c6a63fbafad80b099dd50a13" exitCode=0 Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.267097 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8pfp" event={"ID":"e428e472-401e-45b3-b70b-d2e0f19b52f9","Type":"ContainerDied","Data":"672d235e141d03c0b7695ab7aaeabf0cf31a3ba4c6a63fbafad80b099dd50a13"} Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.267176 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8pfp" event={"ID":"e428e472-401e-45b3-b70b-d2e0f19b52f9","Type":"ContainerDied","Data":"f4492a55226f9991aff63051d538d280d8b1acc8bdf1982928b3eb0c3ec27e6e"} Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.267207 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tlmst" podUID="badd7c24-44c3-4853-9611-aeb49c3df0ab" containerName="registry-server" containerID="cri-o://c8cdf3f9015872c95f56f22ba2c0ad8547f033744ccae299a507f76f28a9e61c" gracePeriod=30 Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.267292 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8pfp" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.267950 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e428e472-401e-45b3-b70b-d2e0f19b52f9-kube-api-access-lfljm" (OuterVolumeSpecName: "kube-api-access-lfljm") pod "e428e472-401e-45b3-b70b-d2e0f19b52f9" (UID: "e428e472-401e-45b3-b70b-d2e0f19b52f9"). InnerVolumeSpecName "kube-api-access-lfljm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.282956 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c55hf" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.290450 4965 scope.go:117] "RemoveContainer" containerID="7178023f0380d6235cb79ddd0cd42412ad0ab55cdb1fb062a5ef0b2216b15cf0" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.339702 4965 scope.go:117] "RemoveContainer" containerID="672d235e141d03c0b7695ab7aaeabf0cf31a3ba4c6a63fbafad80b099dd50a13" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.362599 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/190603fe-6420-4d17-91f5-c37c9038002c-marketplace-operator-metrics\") pod \"190603fe-6420-4d17-91f5-c37c9038002c\" (UID: \"190603fe-6420-4d17-91f5-c37c9038002c\") " Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.362745 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj27w\" (UniqueName: \"kubernetes.io/projected/190603fe-6420-4d17-91f5-c37c9038002c-kube-api-access-lj27w\") pod \"190603fe-6420-4d17-91f5-c37c9038002c\" (UID: \"190603fe-6420-4d17-91f5-c37c9038002c\") " Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.362783 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/190603fe-6420-4d17-91f5-c37c9038002c-marketplace-trusted-ca\") pod \"190603fe-6420-4d17-91f5-c37c9038002c\" (UID: \"190603fe-6420-4d17-91f5-c37c9038002c\") " Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.363113 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e428e472-401e-45b3-b70b-d2e0f19b52f9-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 
09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.363130 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfljm\" (UniqueName: \"kubernetes.io/projected/e428e472-401e-45b3-b70b-d2e0f19b52f9-kube-api-access-lfljm\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.365874 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/190603fe-6420-4d17-91f5-c37c9038002c-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "190603fe-6420-4d17-91f5-c37c9038002c" (UID: "190603fe-6420-4d17-91f5-c37c9038002c"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.371847 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/190603fe-6420-4d17-91f5-c37c9038002c-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "190603fe-6420-4d17-91f5-c37c9038002c" (UID: "190603fe-6420-4d17-91f5-c37c9038002c"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.373069 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/190603fe-6420-4d17-91f5-c37c9038002c-kube-api-access-lj27w" (OuterVolumeSpecName: "kube-api-access-lj27w") pod "190603fe-6420-4d17-91f5-c37c9038002c" (UID: "190603fe-6420-4d17-91f5-c37c9038002c"). InnerVolumeSpecName "kube-api-access-lj27w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.380913 4965 scope.go:117] "RemoveContainer" containerID="1f344d4aee174b55e3016e7c4c616414d8c4772f71de32818708fed004947cb0" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.385444 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fxnw5" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.423821 4965 scope.go:117] "RemoveContainer" containerID="4a72fe077eaedeec97ae4df87b84a7eb2a078ddcd277f35a656ab35710592180" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.464387 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1832525-d3f5-47bc-879b-4d4e4f3c14bd-catalog-content\") pod \"b1832525-d3f5-47bc-879b-4d4e4f3c14bd\" (UID: \"b1832525-d3f5-47bc-879b-4d4e4f3c14bd\") " Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.464744 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1832525-d3f5-47bc-879b-4d4e4f3c14bd-utilities\") pod \"b1832525-d3f5-47bc-879b-4d4e4f3c14bd\" (UID: \"b1832525-d3f5-47bc-879b-4d4e4f3c14bd\") " Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.464843 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pbqz\" (UniqueName: \"kubernetes.io/projected/b1832525-d3f5-47bc-879b-4d4e4f3c14bd-kube-api-access-4pbqz\") pod \"b1832525-d3f5-47bc-879b-4d4e4f3c14bd\" (UID: \"b1832525-d3f5-47bc-879b-4d4e4f3c14bd\") " Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.465371 4965 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/190603fe-6420-4d17-91f5-c37c9038002c-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.465391 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj27w\" (UniqueName: \"kubernetes.io/projected/190603fe-6420-4d17-91f5-c37c9038002c-kube-api-access-lj27w\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.465404 4965 reconciler_common.go:293] 
"Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/190603fe-6420-4d17-91f5-c37c9038002c-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.466300 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1832525-d3f5-47bc-879b-4d4e4f3c14bd-utilities" (OuterVolumeSpecName: "utilities") pod "b1832525-d3f5-47bc-879b-4d4e4f3c14bd" (UID: "b1832525-d3f5-47bc-879b-4d4e4f3c14bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.467608 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pbfkw"] Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.470739 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1832525-d3f5-47bc-879b-4d4e4f3c14bd-kube-api-access-4pbqz" (OuterVolumeSpecName: "kube-api-access-4pbqz") pod "b1832525-d3f5-47bc-879b-4d4e4f3c14bd" (UID: "b1832525-d3f5-47bc-879b-4d4e4f3c14bd"). InnerVolumeSpecName "kube-api-access-4pbqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.502388 4965 scope.go:117] "RemoveContainer" containerID="672d235e141d03c0b7695ab7aaeabf0cf31a3ba4c6a63fbafad80b099dd50a13" Feb 19 09:48:53 crc kubenswrapper[4965]: E0219 09:48:53.503138 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"672d235e141d03c0b7695ab7aaeabf0cf31a3ba4c6a63fbafad80b099dd50a13\": container with ID starting with 672d235e141d03c0b7695ab7aaeabf0cf31a3ba4c6a63fbafad80b099dd50a13 not found: ID does not exist" containerID="672d235e141d03c0b7695ab7aaeabf0cf31a3ba4c6a63fbafad80b099dd50a13" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.503243 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"672d235e141d03c0b7695ab7aaeabf0cf31a3ba4c6a63fbafad80b099dd50a13"} err="failed to get container status \"672d235e141d03c0b7695ab7aaeabf0cf31a3ba4c6a63fbafad80b099dd50a13\": rpc error: code = NotFound desc = could not find container \"672d235e141d03c0b7695ab7aaeabf0cf31a3ba4c6a63fbafad80b099dd50a13\": container with ID starting with 672d235e141d03c0b7695ab7aaeabf0cf31a3ba4c6a63fbafad80b099dd50a13 not found: ID does not exist" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.503283 4965 scope.go:117] "RemoveContainer" containerID="1f344d4aee174b55e3016e7c4c616414d8c4772f71de32818708fed004947cb0" Feb 19 09:48:53 crc kubenswrapper[4965]: E0219 09:48:53.507700 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f344d4aee174b55e3016e7c4c616414d8c4772f71de32818708fed004947cb0\": container with ID starting with 1f344d4aee174b55e3016e7c4c616414d8c4772f71de32818708fed004947cb0 not found: ID does not exist" containerID="1f344d4aee174b55e3016e7c4c616414d8c4772f71de32818708fed004947cb0" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.507766 
4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f344d4aee174b55e3016e7c4c616414d8c4772f71de32818708fed004947cb0"} err="failed to get container status \"1f344d4aee174b55e3016e7c4c616414d8c4772f71de32818708fed004947cb0\": rpc error: code = NotFound desc = could not find container \"1f344d4aee174b55e3016e7c4c616414d8c4772f71de32818708fed004947cb0\": container with ID starting with 1f344d4aee174b55e3016e7c4c616414d8c4772f71de32818708fed004947cb0 not found: ID does not exist" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.507812 4965 scope.go:117] "RemoveContainer" containerID="4a72fe077eaedeec97ae4df87b84a7eb2a078ddcd277f35a656ab35710592180" Feb 19 09:48:53 crc kubenswrapper[4965]: E0219 09:48:53.509107 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a72fe077eaedeec97ae4df87b84a7eb2a078ddcd277f35a656ab35710592180\": container with ID starting with 4a72fe077eaedeec97ae4df87b84a7eb2a078ddcd277f35a656ab35710592180 not found: ID does not exist" containerID="4a72fe077eaedeec97ae4df87b84a7eb2a078ddcd277f35a656ab35710592180" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.512787 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a72fe077eaedeec97ae4df87b84a7eb2a078ddcd277f35a656ab35710592180"} err="failed to get container status \"4a72fe077eaedeec97ae4df87b84a7eb2a078ddcd277f35a656ab35710592180\": rpc error: code = NotFound desc = could not find container \"4a72fe077eaedeec97ae4df87b84a7eb2a078ddcd277f35a656ab35710592180\": container with ID starting with 4a72fe077eaedeec97ae4df87b84a7eb2a078ddcd277f35a656ab35710592180 not found: ID does not exist" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.509150 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1832525-d3f5-47bc-879b-4d4e4f3c14bd-catalog-content" 
(OuterVolumeSpecName: "catalog-content") pod "b1832525-d3f5-47bc-879b-4d4e4f3c14bd" (UID: "b1832525-d3f5-47bc-879b-4d4e4f3c14bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.537133 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e428e472-401e-45b3-b70b-d2e0f19b52f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e428e472-401e-45b3-b70b-d2e0f19b52f9" (UID: "e428e472-401e-45b3-b70b-d2e0f19b52f9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.566694 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2ea1b40-1bc8-462a-a2a2-218c24c27584-utilities\") pod \"c2ea1b40-1bc8-462a-a2a2-218c24c27584\" (UID: \"c2ea1b40-1bc8-462a-a2a2-218c24c27584\") " Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.567326 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ea1b40-1bc8-462a-a2a2-218c24c27584-catalog-content\") pod \"c2ea1b40-1bc8-462a-a2a2-218c24c27584\" (UID: \"c2ea1b40-1bc8-462a-a2a2-218c24c27584\") " Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.567592 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbdk8\" (UniqueName: \"kubernetes.io/projected/c2ea1b40-1bc8-462a-a2a2-218c24c27584-kube-api-access-hbdk8\") pod \"c2ea1b40-1bc8-462a-a2a2-218c24c27584\" (UID: \"c2ea1b40-1bc8-462a-a2a2-218c24c27584\") " Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.567906 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1832525-d3f5-47bc-879b-4d4e4f3c14bd-catalog-content\") on node \"crc\" 
DevicePath \"\"" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.568035 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e428e472-401e-45b3-b70b-d2e0f19b52f9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.568151 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1832525-d3f5-47bc-879b-4d4e4f3c14bd-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.568292 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pbqz\" (UniqueName: \"kubernetes.io/projected/b1832525-d3f5-47bc-879b-4d4e4f3c14bd-kube-api-access-4pbqz\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.570116 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2ea1b40-1bc8-462a-a2a2-218c24c27584-utilities" (OuterVolumeSpecName: "utilities") pod "c2ea1b40-1bc8-462a-a2a2-218c24c27584" (UID: "c2ea1b40-1bc8-462a-a2a2-218c24c27584"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.571486 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2ea1b40-1bc8-462a-a2a2-218c24c27584-kube-api-access-hbdk8" (OuterVolumeSpecName: "kube-api-access-hbdk8") pod "c2ea1b40-1bc8-462a-a2a2-218c24c27584" (UID: "c2ea1b40-1bc8-462a-a2a2-218c24c27584"). InnerVolumeSpecName "kube-api-access-hbdk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.627598 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2ea1b40-1bc8-462a-a2a2-218c24c27584-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2ea1b40-1bc8-462a-a2a2-218c24c27584" (UID: "c2ea1b40-1bc8-462a-a2a2-218c24c27584"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.674234 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ea1b40-1bc8-462a-a2a2-218c24c27584-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.674271 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbdk8\" (UniqueName: \"kubernetes.io/projected/c2ea1b40-1bc8-462a-a2a2-218c24c27584-kube-api-access-hbdk8\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.674283 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2ea1b40-1bc8-462a-a2a2-218c24c27584-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.678517 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s8pfp"] Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.684455 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s8pfp"] Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.693578 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vkrsj"] Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.701545 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-vkrsj"] Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.805788 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tlmst" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.978816 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/badd7c24-44c3-4853-9611-aeb49c3df0ab-catalog-content\") pod \"badd7c24-44c3-4853-9611-aeb49c3df0ab\" (UID: \"badd7c24-44c3-4853-9611-aeb49c3df0ab\") " Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.978978 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/badd7c24-44c3-4853-9611-aeb49c3df0ab-utilities\") pod \"badd7c24-44c3-4853-9611-aeb49c3df0ab\" (UID: \"badd7c24-44c3-4853-9611-aeb49c3df0ab\") " Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.979080 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db2vx\" (UniqueName: \"kubernetes.io/projected/badd7c24-44c3-4853-9611-aeb49c3df0ab-kube-api-access-db2vx\") pod \"badd7c24-44c3-4853-9611-aeb49c3df0ab\" (UID: \"badd7c24-44c3-4853-9611-aeb49c3df0ab\") " Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.979775 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/badd7c24-44c3-4853-9611-aeb49c3df0ab-utilities" (OuterVolumeSpecName: "utilities") pod "badd7c24-44c3-4853-9611-aeb49c3df0ab" (UID: "badd7c24-44c3-4853-9611-aeb49c3df0ab"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:53 crc kubenswrapper[4965]: I0219 09:48:53.986556 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/badd7c24-44c3-4853-9611-aeb49c3df0ab-kube-api-access-db2vx" (OuterVolumeSpecName: "kube-api-access-db2vx") pod "badd7c24-44c3-4853-9611-aeb49c3df0ab" (UID: "badd7c24-44c3-4853-9611-aeb49c3df0ab"). InnerVolumeSpecName "kube-api-access-db2vx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.042814 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/badd7c24-44c3-4853-9611-aeb49c3df0ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "badd7c24-44c3-4853-9611-aeb49c3df0ab" (UID: "badd7c24-44c3-4853-9611-aeb49c3df0ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.080412 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/badd7c24-44c3-4853-9611-aeb49c3df0ab-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.080457 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/badd7c24-44c3-4853-9611-aeb49c3df0ab-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.080471 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db2vx\" (UniqueName: \"kubernetes.io/projected/badd7c24-44c3-4853-9611-aeb49c3df0ab-kube-api-access-db2vx\") on node \"crc\" DevicePath \"\"" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.275330 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pbfkw" 
event={"ID":"16a589f2-57f9-460f-9802-1c63bd877a05","Type":"ContainerStarted","Data":"17bc274975da656425cba243312e03b0f080ba30975725a8902de6ae4ecb1a96"} Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.275397 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pbfkw" event={"ID":"16a589f2-57f9-460f-9802-1c63bd877a05","Type":"ContainerStarted","Data":"26ab037d9c75b40f9b053f58093d1edd2d711bfda55641d8696b67907dbeceef"} Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.275935 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-pbfkw" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.280829 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-pbfkw" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.283802 4965 generic.go:334] "Generic (PLEG): container finished" podID="badd7c24-44c3-4853-9611-aeb49c3df0ab" containerID="c8cdf3f9015872c95f56f22ba2c0ad8547f033744ccae299a507f76f28a9e61c" exitCode=0 Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.283885 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlmst" event={"ID":"badd7c24-44c3-4853-9611-aeb49c3df0ab","Type":"ContainerDied","Data":"c8cdf3f9015872c95f56f22ba2c0ad8547f033744ccae299a507f76f28a9e61c"} Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.283913 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlmst" event={"ID":"badd7c24-44c3-4853-9611-aeb49c3df0ab","Type":"ContainerDied","Data":"a82bb1a06ca6e1f8e6b53476b4e05d1e214d47d5d42523bfa113a2ec08db7362"} Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.283917 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tlmst" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.283938 4965 scope.go:117] "RemoveContainer" containerID="c8cdf3f9015872c95f56f22ba2c0ad8547f033744ccae299a507f76f28a9e61c" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.289286 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c55hf" event={"ID":"b1832525-d3f5-47bc-879b-4d4e4f3c14bd","Type":"ContainerDied","Data":"04b20bac64110b159ef127b8cec2c6414fd3727d547eb620289aa6de722364d4"} Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.289618 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c55hf" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.292106 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxnw5" event={"ID":"c2ea1b40-1bc8-462a-a2a2-218c24c27584","Type":"ContainerDied","Data":"356bc9fc4963daa169e51788c873906cb35206f2efdd76fff51e547267ca86d6"} Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.292290 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fxnw5" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.299678 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-pbfkw" podStartSLOduration=2.29965539 podStartE2EDuration="2.29965539s" podCreationTimestamp="2026-02-19 09:48:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:48:54.297124263 +0000 UTC m=+389.918445573" watchObservedRunningTime="2026-02-19 09:48:54.29965539 +0000 UTC m=+389.920976700" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.313642 4965 scope.go:117] "RemoveContainer" containerID="b80a4fe5ec583fd45130619237e34ff8c7890bbf1ace972a3ef2884ab2dc17f8" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.359460 4965 scope.go:117] "RemoveContainer" containerID="78deb2d1dc7c2770edaea43ae2d5b815f4042f8588f74e7705759234299dbe8d" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.360957 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c55hf"] Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.365683 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c55hf"] Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.378715 4965 scope.go:117] "RemoveContainer" containerID="c8cdf3f9015872c95f56f22ba2c0ad8547f033744ccae299a507f76f28a9e61c" Feb 19 09:48:54 crc kubenswrapper[4965]: E0219 09:48:54.379118 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8cdf3f9015872c95f56f22ba2c0ad8547f033744ccae299a507f76f28a9e61c\": container with ID starting with c8cdf3f9015872c95f56f22ba2c0ad8547f033744ccae299a507f76f28a9e61c not found: ID does not exist" 
containerID="c8cdf3f9015872c95f56f22ba2c0ad8547f033744ccae299a507f76f28a9e61c" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.379154 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8cdf3f9015872c95f56f22ba2c0ad8547f033744ccae299a507f76f28a9e61c"} err="failed to get container status \"c8cdf3f9015872c95f56f22ba2c0ad8547f033744ccae299a507f76f28a9e61c\": rpc error: code = NotFound desc = could not find container \"c8cdf3f9015872c95f56f22ba2c0ad8547f033744ccae299a507f76f28a9e61c\": container with ID starting with c8cdf3f9015872c95f56f22ba2c0ad8547f033744ccae299a507f76f28a9e61c not found: ID does not exist" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.379219 4965 scope.go:117] "RemoveContainer" containerID="b80a4fe5ec583fd45130619237e34ff8c7890bbf1ace972a3ef2884ab2dc17f8" Feb 19 09:48:54 crc kubenswrapper[4965]: E0219 09:48:54.379599 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b80a4fe5ec583fd45130619237e34ff8c7890bbf1ace972a3ef2884ab2dc17f8\": container with ID starting with b80a4fe5ec583fd45130619237e34ff8c7890bbf1ace972a3ef2884ab2dc17f8 not found: ID does not exist" containerID="b80a4fe5ec583fd45130619237e34ff8c7890bbf1ace972a3ef2884ab2dc17f8" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.379650 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b80a4fe5ec583fd45130619237e34ff8c7890bbf1ace972a3ef2884ab2dc17f8"} err="failed to get container status \"b80a4fe5ec583fd45130619237e34ff8c7890bbf1ace972a3ef2884ab2dc17f8\": rpc error: code = NotFound desc = could not find container \"b80a4fe5ec583fd45130619237e34ff8c7890bbf1ace972a3ef2884ab2dc17f8\": container with ID starting with b80a4fe5ec583fd45130619237e34ff8c7890bbf1ace972a3ef2884ab2dc17f8 not found: ID does not exist" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.379669 4965 scope.go:117] 
"RemoveContainer" containerID="78deb2d1dc7c2770edaea43ae2d5b815f4042f8588f74e7705759234299dbe8d" Feb 19 09:48:54 crc kubenswrapper[4965]: E0219 09:48:54.380677 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78deb2d1dc7c2770edaea43ae2d5b815f4042f8588f74e7705759234299dbe8d\": container with ID starting with 78deb2d1dc7c2770edaea43ae2d5b815f4042f8588f74e7705759234299dbe8d not found: ID does not exist" containerID="78deb2d1dc7c2770edaea43ae2d5b815f4042f8588f74e7705759234299dbe8d" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.380734 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78deb2d1dc7c2770edaea43ae2d5b815f4042f8588f74e7705759234299dbe8d"} err="failed to get container status \"78deb2d1dc7c2770edaea43ae2d5b815f4042f8588f74e7705759234299dbe8d\": rpc error: code = NotFound desc = could not find container \"78deb2d1dc7c2770edaea43ae2d5b815f4042f8588f74e7705759234299dbe8d\": container with ID starting with 78deb2d1dc7c2770edaea43ae2d5b815f4042f8588f74e7705759234299dbe8d not found: ID does not exist" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.380769 4965 scope.go:117] "RemoveContainer" containerID="b6ced0454e0dc5fb5ccbcc5e5378ccb5ff7f72fcdcfb20212bcbdb2d3d7e2809" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.391387 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tlmst"] Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.395148 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tlmst"] Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.397888 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fxnw5"] Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.400817 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-fxnw5"] Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.406042 4965 scope.go:117] "RemoveContainer" containerID="444591c8dd8cbddf28839dc5a52d988ad48ea2f9cbec9153f472bf50ed5b05e4" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.426307 4965 scope.go:117] "RemoveContainer" containerID="c49f21393ac45879a0bc1cb39fb0dbc55824e5b4cc347116fff8a53e4f81cc0e" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.443351 4965 scope.go:117] "RemoveContainer" containerID="971cb5acd33203801bcec353714e16d855edfdbd368fc8ce536a258a4a1af14b" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.459476 4965 scope.go:117] "RemoveContainer" containerID="b2c72d57f947aa51bc030460dee6b59f0d877cf0cf098a9ba26312ee10fe00f5" Feb 19 09:48:54 crc kubenswrapper[4965]: I0219 09:48:54.475088 4965 scope.go:117] "RemoveContainer" containerID="21e4ea1808ec358fa098edc41aacf9ea07b3de663c30f9619cac5cfbefe48704" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.207417 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="190603fe-6420-4d17-91f5-c37c9038002c" path="/var/lib/kubelet/pods/190603fe-6420-4d17-91f5-c37c9038002c/volumes" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.210229 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1832525-d3f5-47bc-879b-4d4e4f3c14bd" path="/var/lib/kubelet/pods/b1832525-d3f5-47bc-879b-4d4e4f3c14bd/volumes" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.214015 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="badd7c24-44c3-4853-9611-aeb49c3df0ab" path="/var/lib/kubelet/pods/badd7c24-44c3-4853-9611-aeb49c3df0ab/volumes" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.215318 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2ea1b40-1bc8-462a-a2a2-218c24c27584" path="/var/lib/kubelet/pods/c2ea1b40-1bc8-462a-a2a2-218c24c27584/volumes" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 
09:48:55.216135 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e428e472-401e-45b3-b70b-d2e0f19b52f9" path="/var/lib/kubelet/pods/e428e472-401e-45b3-b70b-d2e0f19b52f9/volumes" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.554833 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p94wp"] Feb 19 09:48:55 crc kubenswrapper[4965]: E0219 09:48:55.556053 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="badd7c24-44c3-4853-9611-aeb49c3df0ab" containerName="registry-server" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.556152 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="badd7c24-44c3-4853-9611-aeb49c3df0ab" containerName="registry-server" Feb 19 09:48:55 crc kubenswrapper[4965]: E0219 09:48:55.556337 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1832525-d3f5-47bc-879b-4d4e4f3c14bd" containerName="extract-utilities" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.556428 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1832525-d3f5-47bc-879b-4d4e4f3c14bd" containerName="extract-utilities" Feb 19 09:48:55 crc kubenswrapper[4965]: E0219 09:48:55.556530 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e428e472-401e-45b3-b70b-d2e0f19b52f9" containerName="extract-utilities" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.556611 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="e428e472-401e-45b3-b70b-d2e0f19b52f9" containerName="extract-utilities" Feb 19 09:48:55 crc kubenswrapper[4965]: E0219 09:48:55.556782 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="190603fe-6420-4d17-91f5-c37c9038002c" containerName="marketplace-operator" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.556855 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="190603fe-6420-4d17-91f5-c37c9038002c" containerName="marketplace-operator" Feb 19 09:48:55 crc 
kubenswrapper[4965]: E0219 09:48:55.556918 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2ea1b40-1bc8-462a-a2a2-218c24c27584" containerName="extract-content" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.556971 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ea1b40-1bc8-462a-a2a2-218c24c27584" containerName="extract-content" Feb 19 09:48:55 crc kubenswrapper[4965]: E0219 09:48:55.557026 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1832525-d3f5-47bc-879b-4d4e4f3c14bd" containerName="extract-content" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.557078 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1832525-d3f5-47bc-879b-4d4e4f3c14bd" containerName="extract-content" Feb 19 09:48:55 crc kubenswrapper[4965]: E0219 09:48:55.557135 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="badd7c24-44c3-4853-9611-aeb49c3df0ab" containerName="extract-content" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.557208 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="badd7c24-44c3-4853-9611-aeb49c3df0ab" containerName="extract-content" Feb 19 09:48:55 crc kubenswrapper[4965]: E0219 09:48:55.557282 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e428e472-401e-45b3-b70b-d2e0f19b52f9" containerName="registry-server" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.557353 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="e428e472-401e-45b3-b70b-d2e0f19b52f9" containerName="registry-server" Feb 19 09:48:55 crc kubenswrapper[4965]: E0219 09:48:55.557433 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2ea1b40-1bc8-462a-a2a2-218c24c27584" containerName="extract-utilities" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.557510 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ea1b40-1bc8-462a-a2a2-218c24c27584" containerName="extract-utilities" Feb 19 09:48:55 crc 
kubenswrapper[4965]: E0219 09:48:55.557591 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="badd7c24-44c3-4853-9611-aeb49c3df0ab" containerName="extract-utilities" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.557679 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="badd7c24-44c3-4853-9611-aeb49c3df0ab" containerName="extract-utilities" Feb 19 09:48:55 crc kubenswrapper[4965]: E0219 09:48:55.557772 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e428e472-401e-45b3-b70b-d2e0f19b52f9" containerName="extract-content" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.557871 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="e428e472-401e-45b3-b70b-d2e0f19b52f9" containerName="extract-content" Feb 19 09:48:55 crc kubenswrapper[4965]: E0219 09:48:55.557959 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1832525-d3f5-47bc-879b-4d4e4f3c14bd" containerName="registry-server" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.558039 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1832525-d3f5-47bc-879b-4d4e4f3c14bd" containerName="registry-server" Feb 19 09:48:55 crc kubenswrapper[4965]: E0219 09:48:55.558123 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2ea1b40-1bc8-462a-a2a2-218c24c27584" containerName="registry-server" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.558218 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ea1b40-1bc8-462a-a2a2-218c24c27584" containerName="registry-server" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.558412 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="e428e472-401e-45b3-b70b-d2e0f19b52f9" containerName="registry-server" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.558516 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="190603fe-6420-4d17-91f5-c37c9038002c" containerName="marketplace-operator" Feb 19 09:48:55 
crc kubenswrapper[4965]: I0219 09:48:55.558611 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2ea1b40-1bc8-462a-a2a2-218c24c27584" containerName="registry-server" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.558693 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="190603fe-6420-4d17-91f5-c37c9038002c" containerName="marketplace-operator" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.558803 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1832525-d3f5-47bc-879b-4d4e4f3c14bd" containerName="registry-server" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.558887 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="badd7c24-44c3-4853-9611-aeb49c3df0ab" containerName="registry-server" Feb 19 09:48:55 crc kubenswrapper[4965]: E0219 09:48:55.559158 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="190603fe-6420-4d17-91f5-c37c9038002c" containerName="marketplace-operator" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.559914 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="190603fe-6420-4d17-91f5-c37c9038002c" containerName="marketplace-operator" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.561228 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p94wp" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.564176 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.567354 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p94wp"] Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.705268 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8tsb\" (UniqueName: \"kubernetes.io/projected/f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d-kube-api-access-v8tsb\") pod \"redhat-marketplace-p94wp\" (UID: \"f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d\") " pod="openshift-marketplace/redhat-marketplace-p94wp" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.705376 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d-utilities\") pod \"redhat-marketplace-p94wp\" (UID: \"f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d\") " pod="openshift-marketplace/redhat-marketplace-p94wp" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.705441 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d-catalog-content\") pod \"redhat-marketplace-p94wp\" (UID: \"f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d\") " pod="openshift-marketplace/redhat-marketplace-p94wp" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.806316 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d-catalog-content\") pod \"redhat-marketplace-p94wp\" (UID: 
\"f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d\") " pod="openshift-marketplace/redhat-marketplace-p94wp" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.806373 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8tsb\" (UniqueName: \"kubernetes.io/projected/f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d-kube-api-access-v8tsb\") pod \"redhat-marketplace-p94wp\" (UID: \"f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d\") " pod="openshift-marketplace/redhat-marketplace-p94wp" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.806427 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d-utilities\") pod \"redhat-marketplace-p94wp\" (UID: \"f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d\") " pod="openshift-marketplace/redhat-marketplace-p94wp" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.807250 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d-utilities\") pod \"redhat-marketplace-p94wp\" (UID: \"f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d\") " pod="openshift-marketplace/redhat-marketplace-p94wp" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.808509 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d-catalog-content\") pod \"redhat-marketplace-p94wp\" (UID: \"f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d\") " pod="openshift-marketplace/redhat-marketplace-p94wp" Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.832116 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8tsb\" (UniqueName: \"kubernetes.io/projected/f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d-kube-api-access-v8tsb\") pod \"redhat-marketplace-p94wp\" (UID: \"f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d\") " 
pod="openshift-marketplace/redhat-marketplace-p94wp"
Feb 19 09:48:55 crc kubenswrapper[4965]: I0219 09:48:55.884968 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p94wp"
Feb 19 09:48:56 crc kubenswrapper[4965]: I0219 09:48:56.091542 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p94wp"]
Feb 19 09:48:56 crc kubenswrapper[4965]: W0219 09:48:56.105733 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8b960f6_0e57_4ebd_83e9_b245cbcd3b9d.slice/crio-98a808b9a78ae221fd6e1136b4bf85b7f2cc0bc66d87c5ad8320875a3dc3ddb9 WatchSource:0}: Error finding container 98a808b9a78ae221fd6e1136b4bf85b7f2cc0bc66d87c5ad8320875a3dc3ddb9: Status 404 returned error can't find the container with id 98a808b9a78ae221fd6e1136b4bf85b7f2cc0bc66d87c5ad8320875a3dc3ddb9
Feb 19 09:48:56 crc kubenswrapper[4965]: I0219 09:48:56.313094 4965 generic.go:334] "Generic (PLEG): container finished" podID="f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d" containerID="3865afefb7490dbb9c66cbd00471368e0e7d3888d1d81a6d6b9b1582c8d85236" exitCode=0
Feb 19 09:48:56 crc kubenswrapper[4965]: I0219 09:48:56.313658 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p94wp" event={"ID":"f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d","Type":"ContainerDied","Data":"3865afefb7490dbb9c66cbd00471368e0e7d3888d1d81a6d6b9b1582c8d85236"}
Feb 19 09:48:56 crc kubenswrapper[4965]: I0219 09:48:56.313709 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p94wp" event={"ID":"f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d","Type":"ContainerStarted","Data":"98a808b9a78ae221fd6e1136b4bf85b7f2cc0bc66d87c5ad8320875a3dc3ddb9"}
Feb 19 09:48:56 crc kubenswrapper[4965]: I0219 09:48:56.735560 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w6fn7"]
Feb 19 09:48:56 crc kubenswrapper[4965]: I0219 09:48:56.737526 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w6fn7"
Feb 19 09:48:56 crc kubenswrapper[4965]: I0219 09:48:56.740325 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 19 09:48:56 crc kubenswrapper[4965]: I0219 09:48:56.745023 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w6fn7"]
Feb 19 09:48:56 crc kubenswrapper[4965]: I0219 09:48:56.829308 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59eb38d1-a115-462c-b054-4660ec8e6ac1-catalog-content\") pod \"redhat-operators-w6fn7\" (UID: \"59eb38d1-a115-462c-b054-4660ec8e6ac1\") " pod="openshift-marketplace/redhat-operators-w6fn7"
Feb 19 09:48:56 crc kubenswrapper[4965]: I0219 09:48:56.829578 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59eb38d1-a115-462c-b054-4660ec8e6ac1-utilities\") pod \"redhat-operators-w6fn7\" (UID: \"59eb38d1-a115-462c-b054-4660ec8e6ac1\") " pod="openshift-marketplace/redhat-operators-w6fn7"
Feb 19 09:48:56 crc kubenswrapper[4965]: I0219 09:48:56.829620 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqjr5\" (UniqueName: \"kubernetes.io/projected/59eb38d1-a115-462c-b054-4660ec8e6ac1-kube-api-access-xqjr5\") pod \"redhat-operators-w6fn7\" (UID: \"59eb38d1-a115-462c-b054-4660ec8e6ac1\") " pod="openshift-marketplace/redhat-operators-w6fn7"
Feb 19 09:48:56 crc kubenswrapper[4965]: I0219 09:48:56.931037 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59eb38d1-a115-462c-b054-4660ec8e6ac1-utilities\") pod \"redhat-operators-w6fn7\" (UID: \"59eb38d1-a115-462c-b054-4660ec8e6ac1\") " pod="openshift-marketplace/redhat-operators-w6fn7"
Feb 19 09:48:56 crc kubenswrapper[4965]: I0219 09:48:56.931130 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqjr5\" (UniqueName: \"kubernetes.io/projected/59eb38d1-a115-462c-b054-4660ec8e6ac1-kube-api-access-xqjr5\") pod \"redhat-operators-w6fn7\" (UID: \"59eb38d1-a115-462c-b054-4660ec8e6ac1\") " pod="openshift-marketplace/redhat-operators-w6fn7"
Feb 19 09:48:56 crc kubenswrapper[4965]: I0219 09:48:56.931218 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59eb38d1-a115-462c-b054-4660ec8e6ac1-catalog-content\") pod \"redhat-operators-w6fn7\" (UID: \"59eb38d1-a115-462c-b054-4660ec8e6ac1\") " pod="openshift-marketplace/redhat-operators-w6fn7"
Feb 19 09:48:56 crc kubenswrapper[4965]: I0219 09:48:56.931765 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59eb38d1-a115-462c-b054-4660ec8e6ac1-catalog-content\") pod \"redhat-operators-w6fn7\" (UID: \"59eb38d1-a115-462c-b054-4660ec8e6ac1\") " pod="openshift-marketplace/redhat-operators-w6fn7"
Feb 19 09:48:56 crc kubenswrapper[4965]: I0219 09:48:56.932562 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59eb38d1-a115-462c-b054-4660ec8e6ac1-utilities\") pod \"redhat-operators-w6fn7\" (UID: \"59eb38d1-a115-462c-b054-4660ec8e6ac1\") " pod="openshift-marketplace/redhat-operators-w6fn7"
Feb 19 09:48:56 crc kubenswrapper[4965]: I0219 09:48:56.972299 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqjr5\" (UniqueName: \"kubernetes.io/projected/59eb38d1-a115-462c-b054-4660ec8e6ac1-kube-api-access-xqjr5\") pod \"redhat-operators-w6fn7\" (UID: \"59eb38d1-a115-462c-b054-4660ec8e6ac1\") " pod="openshift-marketplace/redhat-operators-w6fn7"
Feb 19 09:48:57 crc kubenswrapper[4965]: I0219 09:48:57.056766 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w6fn7"
Feb 19 09:48:57 crc kubenswrapper[4965]: I0219 09:48:57.267777 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w6fn7"]
Feb 19 09:48:57 crc kubenswrapper[4965]: I0219 09:48:57.326756 4965 generic.go:334] "Generic (PLEG): container finished" podID="f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d" containerID="d996362c32219fa02c9d718baf20291362050d6b256f26aef902f48e58c7791d" exitCode=0
Feb 19 09:48:57 crc kubenswrapper[4965]: I0219 09:48:57.326922 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p94wp" event={"ID":"f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d","Type":"ContainerDied","Data":"d996362c32219fa02c9d718baf20291362050d6b256f26aef902f48e58c7791d"}
Feb 19 09:48:57 crc kubenswrapper[4965]: I0219 09:48:57.328648 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6fn7" event={"ID":"59eb38d1-a115-462c-b054-4660ec8e6ac1","Type":"ContainerStarted","Data":"64c234f3546aaa63a063d23ce2e53640d16c7507b0cb0ed8cc9f289c27a00b5a"}
Feb 19 09:48:58 crc kubenswrapper[4965]: I0219 09:48:58.136577 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6ml7g"]
Feb 19 09:48:58 crc kubenswrapper[4965]: I0219 09:48:58.137902 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6ml7g"
Feb 19 09:48:58 crc kubenswrapper[4965]: I0219 09:48:58.141705 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 19 09:48:58 crc kubenswrapper[4965]: I0219 09:48:58.145815 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmzw6\" (UniqueName: \"kubernetes.io/projected/32baa37b-a196-447f-af2a-0f1cc92785d8-kube-api-access-bmzw6\") pod \"community-operators-6ml7g\" (UID: \"32baa37b-a196-447f-af2a-0f1cc92785d8\") " pod="openshift-marketplace/community-operators-6ml7g"
Feb 19 09:48:58 crc kubenswrapper[4965]: I0219 09:48:58.145858 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32baa37b-a196-447f-af2a-0f1cc92785d8-catalog-content\") pod \"community-operators-6ml7g\" (UID: \"32baa37b-a196-447f-af2a-0f1cc92785d8\") " pod="openshift-marketplace/community-operators-6ml7g"
Feb 19 09:48:58 crc kubenswrapper[4965]: I0219 09:48:58.145896 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32baa37b-a196-447f-af2a-0f1cc92785d8-utilities\") pod \"community-operators-6ml7g\" (UID: \"32baa37b-a196-447f-af2a-0f1cc92785d8\") " pod="openshift-marketplace/community-operators-6ml7g"
Feb 19 09:48:58 crc kubenswrapper[4965]: I0219 09:48:58.153596 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6ml7g"]
Feb 19 09:48:58 crc kubenswrapper[4965]: I0219 09:48:58.247065 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmzw6\" (UniqueName: \"kubernetes.io/projected/32baa37b-a196-447f-af2a-0f1cc92785d8-kube-api-access-bmzw6\") pod \"community-operators-6ml7g\" (UID: \"32baa37b-a196-447f-af2a-0f1cc92785d8\") " pod="openshift-marketplace/community-operators-6ml7g"
Feb 19 09:48:58 crc kubenswrapper[4965]: I0219 09:48:58.247120 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32baa37b-a196-447f-af2a-0f1cc92785d8-catalog-content\") pod \"community-operators-6ml7g\" (UID: \"32baa37b-a196-447f-af2a-0f1cc92785d8\") " pod="openshift-marketplace/community-operators-6ml7g"
Feb 19 09:48:58 crc kubenswrapper[4965]: I0219 09:48:58.247170 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32baa37b-a196-447f-af2a-0f1cc92785d8-utilities\") pod \"community-operators-6ml7g\" (UID: \"32baa37b-a196-447f-af2a-0f1cc92785d8\") " pod="openshift-marketplace/community-operators-6ml7g"
Feb 19 09:48:58 crc kubenswrapper[4965]: I0219 09:48:58.248078 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32baa37b-a196-447f-af2a-0f1cc92785d8-catalog-content\") pod \"community-operators-6ml7g\" (UID: \"32baa37b-a196-447f-af2a-0f1cc92785d8\") " pod="openshift-marketplace/community-operators-6ml7g"
Feb 19 09:48:58 crc kubenswrapper[4965]: I0219 09:48:58.248699 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32baa37b-a196-447f-af2a-0f1cc92785d8-utilities\") pod \"community-operators-6ml7g\" (UID: \"32baa37b-a196-447f-af2a-0f1cc92785d8\") " pod="openshift-marketplace/community-operators-6ml7g"
Feb 19 09:48:58 crc kubenswrapper[4965]: I0219 09:48:58.270875 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmzw6\" (UniqueName: \"kubernetes.io/projected/32baa37b-a196-447f-af2a-0f1cc92785d8-kube-api-access-bmzw6\") pod \"community-operators-6ml7g\" (UID: \"32baa37b-a196-447f-af2a-0f1cc92785d8\") " pod="openshift-marketplace/community-operators-6ml7g"
Feb 19 09:48:58 crc kubenswrapper[4965]: I0219 09:48:58.338233 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p94wp" event={"ID":"f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d","Type":"ContainerStarted","Data":"93132315e4ae498e2f1f9ca443e7c357f0ec95ce073a8a26bb2c1a7d17240a9d"}
Feb 19 09:48:58 crc kubenswrapper[4965]: I0219 09:48:58.340635 4965 generic.go:334] "Generic (PLEG): container finished" podID="59eb38d1-a115-462c-b054-4660ec8e6ac1" containerID="45c57c19f5d5e59a82f61fda14eef87f2b3bc277cbfbbf086b6fcdc8df76e796" exitCode=0
Feb 19 09:48:58 crc kubenswrapper[4965]: I0219 09:48:58.340712 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6fn7" event={"ID":"59eb38d1-a115-462c-b054-4660ec8e6ac1","Type":"ContainerDied","Data":"45c57c19f5d5e59a82f61fda14eef87f2b3bc277cbfbbf086b6fcdc8df76e796"}
Feb 19 09:48:58 crc kubenswrapper[4965]: I0219 09:48:58.371112 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p94wp" podStartSLOduration=1.9423542029999998 podStartE2EDuration="3.371087771s" podCreationTimestamp="2026-02-19 09:48:55 +0000 UTC" firstStartedPulling="2026-02-19 09:48:56.317254384 +0000 UTC m=+391.938575704" lastFinishedPulling="2026-02-19 09:48:57.745987962 +0000 UTC m=+393.367309272" observedRunningTime="2026-02-19 09:48:58.364486646 +0000 UTC m=+393.985807976" watchObservedRunningTime="2026-02-19 09:48:58.371087771 +0000 UTC m=+393.992409081"
Feb 19 09:48:58 crc kubenswrapper[4965]: I0219 09:48:58.456840 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6ml7g"
Feb 19 09:48:58 crc kubenswrapper[4965]: I0219 09:48:58.699899 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6ml7g"]
Feb 19 09:48:59 crc kubenswrapper[4965]: I0219 09:48:59.136437 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7ct5s"]
Feb 19 09:48:59 crc kubenswrapper[4965]: I0219 09:48:59.138066 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7ct5s"
Feb 19 09:48:59 crc kubenswrapper[4965]: I0219 09:48:59.140496 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 19 09:48:59 crc kubenswrapper[4965]: I0219 09:48:59.152042 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7ct5s"]
Feb 19 09:48:59 crc kubenswrapper[4965]: I0219 09:48:59.258850 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa07bb4-7540-437b-9720-9cf4b8b3af65-utilities\") pod \"certified-operators-7ct5s\" (UID: \"5aa07bb4-7540-437b-9720-9cf4b8b3af65\") " pod="openshift-marketplace/certified-operators-7ct5s"
Feb 19 09:48:59 crc kubenswrapper[4965]: I0219 09:48:59.258916 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m2p2\" (UniqueName: \"kubernetes.io/projected/5aa07bb4-7540-437b-9720-9cf4b8b3af65-kube-api-access-8m2p2\") pod \"certified-operators-7ct5s\" (UID: \"5aa07bb4-7540-437b-9720-9cf4b8b3af65\") " pod="openshift-marketplace/certified-operators-7ct5s"
Feb 19 09:48:59 crc kubenswrapper[4965]: I0219 09:48:59.259188 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa07bb4-7540-437b-9720-9cf4b8b3af65-catalog-content\") pod \"certified-operators-7ct5s\" (UID: \"5aa07bb4-7540-437b-9720-9cf4b8b3af65\") " pod="openshift-marketplace/certified-operators-7ct5s"
Feb 19 09:48:59 crc kubenswrapper[4965]: I0219 09:48:59.348534 4965 generic.go:334] "Generic (PLEG): container finished" podID="32baa37b-a196-447f-af2a-0f1cc92785d8" containerID="1fdb32f1fae5681e2d557f0a752ec396d9260d9461701c9aabd2c3c7d138b72e" exitCode=0
Feb 19 09:48:59 crc kubenswrapper[4965]: I0219 09:48:59.348608 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ml7g" event={"ID":"32baa37b-a196-447f-af2a-0f1cc92785d8","Type":"ContainerDied","Data":"1fdb32f1fae5681e2d557f0a752ec396d9260d9461701c9aabd2c3c7d138b72e"}
Feb 19 09:48:59 crc kubenswrapper[4965]: I0219 09:48:59.348676 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ml7g" event={"ID":"32baa37b-a196-447f-af2a-0f1cc92785d8","Type":"ContainerStarted","Data":"bf586d920abc6862045c96db49d207341c49a0ba60e1986158b815e1a131d0f6"}
Feb 19 09:48:59 crc kubenswrapper[4965]: I0219 09:48:59.352829 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6fn7" event={"ID":"59eb38d1-a115-462c-b054-4660ec8e6ac1","Type":"ContainerStarted","Data":"58f875e700b36a612582fc2201acc9f91bc211dc06e914869a8a7c5911290c06"}
Feb 19 09:48:59 crc kubenswrapper[4965]: I0219 09:48:59.361104 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa07bb4-7540-437b-9720-9cf4b8b3af65-catalog-content\") pod \"certified-operators-7ct5s\" (UID: \"5aa07bb4-7540-437b-9720-9cf4b8b3af65\") " pod="openshift-marketplace/certified-operators-7ct5s"
Feb 19 09:48:59 crc kubenswrapper[4965]: I0219 09:48:59.361227 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa07bb4-7540-437b-9720-9cf4b8b3af65-utilities\") pod \"certified-operators-7ct5s\" (UID: \"5aa07bb4-7540-437b-9720-9cf4b8b3af65\") " pod="openshift-marketplace/certified-operators-7ct5s"
Feb 19 09:48:59 crc kubenswrapper[4965]: I0219 09:48:59.361258 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m2p2\" (UniqueName: \"kubernetes.io/projected/5aa07bb4-7540-437b-9720-9cf4b8b3af65-kube-api-access-8m2p2\") pod \"certified-operators-7ct5s\" (UID: \"5aa07bb4-7540-437b-9720-9cf4b8b3af65\") " pod="openshift-marketplace/certified-operators-7ct5s"
Feb 19 09:48:59 crc kubenswrapper[4965]: I0219 09:48:59.362871 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa07bb4-7540-437b-9720-9cf4b8b3af65-utilities\") pod \"certified-operators-7ct5s\" (UID: \"5aa07bb4-7540-437b-9720-9cf4b8b3af65\") " pod="openshift-marketplace/certified-operators-7ct5s"
Feb 19 09:48:59 crc kubenswrapper[4965]: I0219 09:48:59.362949 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa07bb4-7540-437b-9720-9cf4b8b3af65-catalog-content\") pod \"certified-operators-7ct5s\" (UID: \"5aa07bb4-7540-437b-9720-9cf4b8b3af65\") " pod="openshift-marketplace/certified-operators-7ct5s"
Feb 19 09:48:59 crc kubenswrapper[4965]: I0219 09:48:59.391342 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m2p2\" (UniqueName: \"kubernetes.io/projected/5aa07bb4-7540-437b-9720-9cf4b8b3af65-kube-api-access-8m2p2\") pod \"certified-operators-7ct5s\" (UID: \"5aa07bb4-7540-437b-9720-9cf4b8b3af65\") " pod="openshift-marketplace/certified-operators-7ct5s"
Feb 19 09:48:59 crc kubenswrapper[4965]: I0219 09:48:59.513774 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7ct5s"
Feb 19 09:48:59 crc kubenswrapper[4965]: I0219 09:48:59.759400 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7ct5s"]
Feb 19 09:49:00 crc kubenswrapper[4965]: I0219 09:49:00.360393 4965 generic.go:334] "Generic (PLEG): container finished" podID="5aa07bb4-7540-437b-9720-9cf4b8b3af65" containerID="79b5df30d728f68bc78d0bbfd80895e14ea436d46b2086306f6322a82e7eb34e" exitCode=0
Feb 19 09:49:00 crc kubenswrapper[4965]: I0219 09:49:00.360504 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ct5s" event={"ID":"5aa07bb4-7540-437b-9720-9cf4b8b3af65","Type":"ContainerDied","Data":"79b5df30d728f68bc78d0bbfd80895e14ea436d46b2086306f6322a82e7eb34e"}
Feb 19 09:49:00 crc kubenswrapper[4965]: I0219 09:49:00.360823 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ct5s" event={"ID":"5aa07bb4-7540-437b-9720-9cf4b8b3af65","Type":"ContainerStarted","Data":"e0d390437982ec942bc6d801f8f73c6a3d391575bcac381b0ab7e3e9e58bd17f"}
Feb 19 09:49:00 crc kubenswrapper[4965]: I0219 09:49:00.366066 4965 generic.go:334] "Generic (PLEG): container finished" podID="59eb38d1-a115-462c-b054-4660ec8e6ac1" containerID="58f875e700b36a612582fc2201acc9f91bc211dc06e914869a8a7c5911290c06" exitCode=0
Feb 19 09:49:00 crc kubenswrapper[4965]: I0219 09:49:00.366148 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6fn7" event={"ID":"59eb38d1-a115-462c-b054-4660ec8e6ac1","Type":"ContainerDied","Data":"58f875e700b36a612582fc2201acc9f91bc211dc06e914869a8a7c5911290c06"}
Feb 19 09:49:00 crc kubenswrapper[4965]: I0219 09:49:00.370553 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ml7g" event={"ID":"32baa37b-a196-447f-af2a-0f1cc92785d8","Type":"ContainerStarted","Data":"1e7d539004b58e2f76e934fc605668114a9d70eaeebbbe1a401c1c15edaf6b30"}
Feb 19 09:49:01 crc kubenswrapper[4965]: I0219 09:49:01.381661 4965 generic.go:334] "Generic (PLEG): container finished" podID="32baa37b-a196-447f-af2a-0f1cc92785d8" containerID="1e7d539004b58e2f76e934fc605668114a9d70eaeebbbe1a401c1c15edaf6b30" exitCode=0
Feb 19 09:49:01 crc kubenswrapper[4965]: I0219 09:49:01.381762 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ml7g" event={"ID":"32baa37b-a196-447f-af2a-0f1cc92785d8","Type":"ContainerDied","Data":"1e7d539004b58e2f76e934fc605668114a9d70eaeebbbe1a401c1c15edaf6b30"}
Feb 19 09:49:01 crc kubenswrapper[4965]: I0219 09:49:01.388222 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ct5s" event={"ID":"5aa07bb4-7540-437b-9720-9cf4b8b3af65","Type":"ContainerStarted","Data":"ea2f4307e1dbe35d6af39cb06bbf396afd323100da9ecf6d3c82628174fcbc82"}
Feb 19 09:49:01 crc kubenswrapper[4965]: I0219 09:49:01.392169 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6fn7" event={"ID":"59eb38d1-a115-462c-b054-4660ec8e6ac1","Type":"ContainerStarted","Data":"1777d28822381358bdb9088b9ebb3fab550e2303cde0ecb7840da57c29730d58"}
Feb 19 09:49:01 crc kubenswrapper[4965]: I0219 09:49:01.424521 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w6fn7" podStartSLOduration=2.995581464 podStartE2EDuration="5.424499472s" podCreationTimestamp="2026-02-19 09:48:56 +0000 UTC" firstStartedPulling="2026-02-19 09:48:58.34350575 +0000 UTC m=+393.964827090" lastFinishedPulling="2026-02-19 09:49:00.772423798 +0000 UTC m=+396.393745098" observedRunningTime="2026-02-19 09:49:01.421765429 +0000 UTC m=+397.043086759" watchObservedRunningTime="2026-02-19 09:49:01.424499472 +0000 UTC m=+397.045820772"
Feb 19 09:49:02 crc kubenswrapper[4965]: I0219 09:49:02.402004 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ml7g" event={"ID":"32baa37b-a196-447f-af2a-0f1cc92785d8","Type":"ContainerStarted","Data":"794de4a0de05fdbb9d866a289780cbaf2fb0b9ab86f4dceb9ae7724d9230df93"}
Feb 19 09:49:02 crc kubenswrapper[4965]: I0219 09:49:02.405560 4965 generic.go:334] "Generic (PLEG): container finished" podID="5aa07bb4-7540-437b-9720-9cf4b8b3af65" containerID="ea2f4307e1dbe35d6af39cb06bbf396afd323100da9ecf6d3c82628174fcbc82" exitCode=0
Feb 19 09:49:02 crc kubenswrapper[4965]: I0219 09:49:02.405677 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ct5s" event={"ID":"5aa07bb4-7540-437b-9720-9cf4b8b3af65","Type":"ContainerDied","Data":"ea2f4307e1dbe35d6af39cb06bbf396afd323100da9ecf6d3c82628174fcbc82"}
Feb 19 09:49:02 crc kubenswrapper[4965]: I0219 09:49:02.433627 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6ml7g" podStartSLOduration=2.020323943 podStartE2EDuration="4.433601837s" podCreationTimestamp="2026-02-19 09:48:58 +0000 UTC" firstStartedPulling="2026-02-19 09:48:59.350266764 +0000 UTC m=+394.971588084" lastFinishedPulling="2026-02-19 09:49:01.763544668 +0000 UTC m=+397.384865978" observedRunningTime="2026-02-19 09:49:02.425700769 +0000 UTC m=+398.047022079" watchObservedRunningTime="2026-02-19 09:49:02.433601837 +0000 UTC m=+398.054923157"
Feb 19 09:49:04 crc kubenswrapper[4965]: I0219 09:49:04.420916 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ct5s" event={"ID":"5aa07bb4-7540-437b-9720-9cf4b8b3af65","Type":"ContainerStarted","Data":"c1c1898ca2f01380bc70bb3fec828bea453bd40c05302d69590fe041db5f128a"}
Feb 19 09:49:04 crc kubenswrapper[4965]: I0219 09:49:04.448161 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7ct5s" podStartSLOduration=2.992957278 podStartE2EDuration="5.448136933s" podCreationTimestamp="2026-02-19 09:48:59 +0000 UTC" firstStartedPulling="2026-02-19 09:49:00.363954392 +0000 UTC m=+395.985275702" lastFinishedPulling="2026-02-19 09:49:02.819134047 +0000 UTC m=+398.440455357" observedRunningTime="2026-02-19 09:49:04.443659554 +0000 UTC m=+400.064980864" watchObservedRunningTime="2026-02-19 09:49:04.448136933 +0000 UTC m=+400.069458243"
Feb 19 09:49:05 crc kubenswrapper[4965]: I0219 09:49:05.886695 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p94wp"
Feb 19 09:49:05 crc kubenswrapper[4965]: I0219 09:49:05.886772 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p94wp"
Feb 19 09:49:05 crc kubenswrapper[4965]: I0219 09:49:05.947877 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p94wp"
Feb 19 09:49:06 crc kubenswrapper[4965]: I0219 09:49:06.475539 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p94wp"
Feb 19 09:49:07 crc kubenswrapper[4965]: I0219 09:49:07.057145 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w6fn7"
Feb 19 09:49:07 crc kubenswrapper[4965]: I0219 09:49:07.057218 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w6fn7"
Feb 19 09:49:07 crc kubenswrapper[4965]: I0219 09:49:07.102812 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w6fn7"
Feb 19 09:49:07 crc kubenswrapper[4965]: I0219 09:49:07.480369 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w6fn7"
Feb 19 09:49:08 crc kubenswrapper[4965]: I0219 09:49:08.457584 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6ml7g"
Feb 19 09:49:08 crc kubenswrapper[4965]: I0219 09:49:08.457782 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6ml7g"
Feb 19 09:49:08 crc kubenswrapper[4965]: I0219 09:49:08.498373 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6ml7g"
Feb 19 09:49:09 crc kubenswrapper[4965]: I0219 09:49:09.505443 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6ml7g"
Feb 19 09:49:09 crc kubenswrapper[4965]: I0219 09:49:09.514893 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7ct5s"
Feb 19 09:49:09 crc kubenswrapper[4965]: I0219 09:49:09.514941 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7ct5s"
Feb 19 09:49:09 crc kubenswrapper[4965]: I0219 09:49:09.562244 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7ct5s"
Feb 19 09:49:10 crc kubenswrapper[4965]: I0219 09:49:10.524450 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7ct5s"
Feb 19 09:49:16 crc kubenswrapper[4965]: I0219 09:49:16.601697 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 09:49:16 crc kubenswrapper[4965]: I0219 09:49:16.602130 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 09:49:16 crc kubenswrapper[4965]: I0219 09:49:16.602212 4965 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9"
Feb 19 09:49:16 crc kubenswrapper[4965]: I0219 09:49:16.602921 4965 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a3a16677e101e0014d7e0c43b5b3a431fd87db479114715ef53b03062691e273"} pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 09:49:16 crc kubenswrapper[4965]: I0219 09:49:16.602983 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" containerID="cri-o://a3a16677e101e0014d7e0c43b5b3a431fd87db479114715ef53b03062691e273" gracePeriod=600
Feb 19 09:49:17 crc kubenswrapper[4965]: I0219 09:49:17.503515 4965 generic.go:334] "Generic (PLEG): container finished" podID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerID="a3a16677e101e0014d7e0c43b5b3a431fd87db479114715ef53b03062691e273" exitCode=0
Feb 19 09:49:17 crc kubenswrapper[4965]: I0219 09:49:17.503588 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerDied","Data":"a3a16677e101e0014d7e0c43b5b3a431fd87db479114715ef53b03062691e273"}
Feb 19 09:49:17 crc kubenswrapper[4965]: I0219 09:49:17.503639 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerStarted","Data":"aeea13d7baceae3d38efbdd04018cfdc27f75d2c326225193a932aad7bc7bcd2"}
Feb 19 09:49:17 crc kubenswrapper[4965]: I0219 09:49:17.503673 4965 scope.go:117] "RemoveContainer" containerID="a1ff237da7e509d3b4a25e8042c384a768ef0123d1687b574502f769bde3121b"
Feb 19 09:51:16 crc kubenswrapper[4965]: I0219 09:51:16.601781 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 09:51:16 crc kubenswrapper[4965]: I0219 09:51:16.602916 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 09:51:46 crc kubenswrapper[4965]: I0219 09:51:46.601327 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 09:51:46 crc kubenswrapper[4965]: I0219 09:51:46.602299 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 09:52:16 crc kubenswrapper[4965]: I0219 09:52:16.601455 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 09:52:16 crc kubenswrapper[4965]: I0219 09:52:16.602366 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 09:52:16 crc kubenswrapper[4965]: I0219 09:52:16.602432 4965 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9"
Feb 19 09:52:16 crc kubenswrapper[4965]: I0219 09:52:16.603182 4965 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aeea13d7baceae3d38efbdd04018cfdc27f75d2c326225193a932aad7bc7bcd2"} pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 09:52:16 crc kubenswrapper[4965]: I0219 09:52:16.603278 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" containerID="cri-o://aeea13d7baceae3d38efbdd04018cfdc27f75d2c326225193a932aad7bc7bcd2" gracePeriod=600
Feb 19 09:52:16 crc kubenswrapper[4965]: I0219 09:52:16.732654 4965 generic.go:334] "Generic (PLEG): container finished" podID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerID="aeea13d7baceae3d38efbdd04018cfdc27f75d2c326225193a932aad7bc7bcd2" exitCode=0
Feb 19 09:52:16 crc kubenswrapper[4965]: I0219 09:52:16.732791 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerDied","Data":"aeea13d7baceae3d38efbdd04018cfdc27f75d2c326225193a932aad7bc7bcd2"}
Feb 19 09:52:16 crc kubenswrapper[4965]: I0219 09:52:16.733342 4965 scope.go:117] "RemoveContainer" containerID="a3a16677e101e0014d7e0c43b5b3a431fd87db479114715ef53b03062691e273"
Feb 19 09:52:17 crc kubenswrapper[4965]: I0219 09:52:17.742012 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerStarted","Data":"b59d24bc3fa01905164aa2b246a7f2c9309e5d002a2ffc3bd7f13562cf306e5b"}
Feb 19 09:54:15 crc kubenswrapper[4965]: I0219 09:54:15.109568 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7"]
Feb 19 09:54:15 crc kubenswrapper[4965]: I0219 09:54:15.111990 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7"
Feb 19 09:54:15 crc kubenswrapper[4965]: I0219 09:54:15.114561 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 19 09:54:15 crc kubenswrapper[4965]: I0219 09:54:15.128056 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7"]
Feb 19 09:54:15 crc kubenswrapper[4965]: I0219 09:54:15.288341 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9ctl\" (UniqueName: \"kubernetes.io/projected/73c31c1a-7233-4c2c-b79b-70abd832d746-kube-api-access-r9ctl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7\" (UID: \"73c31c1a-7233-4c2c-b79b-70abd832d746\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7"
Feb 19 09:54:15 crc kubenswrapper[4965]: I0219 09:54:15.288987 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73c31c1a-7233-4c2c-b79b-70abd832d746-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7\" (UID: \"73c31c1a-7233-4c2c-b79b-70abd832d746\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7"
Feb 19 09:54:15 crc kubenswrapper[4965]: I0219 09:54:15.289172 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73c31c1a-7233-4c2c-b79b-70abd832d746-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7\" (UID: \"73c31c1a-7233-4c2c-b79b-70abd832d746\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7"
Feb 19 09:54:15 crc kubenswrapper[4965]: I0219 09:54:15.390691 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9ctl\" (UniqueName: \"kubernetes.io/projected/73c31c1a-7233-4c2c-b79b-70abd832d746-kube-api-access-r9ctl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7\" (UID: \"73c31c1a-7233-4c2c-b79b-70abd832d746\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7"
Feb 19 09:54:15 crc kubenswrapper[4965]: I0219 09:54:15.390768 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73c31c1a-7233-4c2c-b79b-70abd832d746-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7\" (UID: \"73c31c1a-7233-4c2c-b79b-70abd832d746\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7"
Feb 19 09:54:15 crc kubenswrapper[4965]: I0219 09:54:15.390813 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73c31c1a-7233-4c2c-b79b-70abd832d746-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7\" (UID: \"73c31c1a-7233-4c2c-b79b-70abd832d746\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7"
Feb 19 09:54:15 crc kubenswrapper[4965]: I0219 09:54:15.391887 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73c31c1a-7233-4c2c-b79b-70abd832d746-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7\" (UID: \"73c31c1a-7233-4c2c-b79b-70abd832d746\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7"
Feb 19 09:54:15 crc kubenswrapper[4965]: I0219 09:54:15.391944 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName:
\"kubernetes.io/empty-dir/73c31c1a-7233-4c2c-b79b-70abd832d746-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7\" (UID: \"73c31c1a-7233-4c2c-b79b-70abd832d746\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7" Feb 19 09:54:15 crc kubenswrapper[4965]: I0219 09:54:15.418887 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9ctl\" (UniqueName: \"kubernetes.io/projected/73c31c1a-7233-4c2c-b79b-70abd832d746-kube-api-access-r9ctl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7\" (UID: \"73c31c1a-7233-4c2c-b79b-70abd832d746\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7" Feb 19 09:54:15 crc kubenswrapper[4965]: I0219 09:54:15.446781 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7" Feb 19 09:54:15 crc kubenswrapper[4965]: I0219 09:54:15.691361 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7"] Feb 19 09:54:16 crc kubenswrapper[4965]: I0219 09:54:16.591232 4965 generic.go:334] "Generic (PLEG): container finished" podID="73c31c1a-7233-4c2c-b79b-70abd832d746" containerID="10ee00ef8d971a8a85df217c11dd901e580be93b7904f3362c9a47a2bd432b44" exitCode=0 Feb 19 09:54:16 crc kubenswrapper[4965]: I0219 09:54:16.591479 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7" event={"ID":"73c31c1a-7233-4c2c-b79b-70abd832d746","Type":"ContainerDied","Data":"10ee00ef8d971a8a85df217c11dd901e580be93b7904f3362c9a47a2bd432b44"} Feb 19 09:54:16 crc kubenswrapper[4965]: I0219 09:54:16.591859 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7" event={"ID":"73c31c1a-7233-4c2c-b79b-70abd832d746","Type":"ContainerStarted","Data":"3ec53897a30b2fa21f0e3c62fe24566765b32c3cc16fc7917f17e0bc82c615ff"} Feb 19 09:54:16 crc kubenswrapper[4965]: I0219 09:54:16.594229 4965 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 09:54:16 crc kubenswrapper[4965]: I0219 09:54:16.601433 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:54:16 crc kubenswrapper[4965]: I0219 09:54:16.601541 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:54:18 crc kubenswrapper[4965]: I0219 09:54:18.606831 4965 generic.go:334] "Generic (PLEG): container finished" podID="73c31c1a-7233-4c2c-b79b-70abd832d746" containerID="ccc6e357b8e25cb7e29cdb0223d431133f215f68fa5f30f7799481f78c44c3d8" exitCode=0 Feb 19 09:54:18 crc kubenswrapper[4965]: I0219 09:54:18.606942 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7" event={"ID":"73c31c1a-7233-4c2c-b79b-70abd832d746","Type":"ContainerDied","Data":"ccc6e357b8e25cb7e29cdb0223d431133f215f68fa5f30f7799481f78c44c3d8"} Feb 19 09:54:19 crc kubenswrapper[4965]: I0219 09:54:19.617045 4965 generic.go:334] "Generic (PLEG): container finished" podID="73c31c1a-7233-4c2c-b79b-70abd832d746" 
containerID="5ff1e5849a930cf6048db2e8c5d7b2036930699223ff3f2799898f53f0855256" exitCode=0 Feb 19 09:54:19 crc kubenswrapper[4965]: I0219 09:54:19.617171 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7" event={"ID":"73c31c1a-7233-4c2c-b79b-70abd832d746","Type":"ContainerDied","Data":"5ff1e5849a930cf6048db2e8c5d7b2036930699223ff3f2799898f53f0855256"} Feb 19 09:54:20 crc kubenswrapper[4965]: I0219 09:54:20.965690 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7" Feb 19 09:54:21 crc kubenswrapper[4965]: I0219 09:54:21.067657 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73c31c1a-7233-4c2c-b79b-70abd832d746-bundle\") pod \"73c31c1a-7233-4c2c-b79b-70abd832d746\" (UID: \"73c31c1a-7233-4c2c-b79b-70abd832d746\") " Feb 19 09:54:21 crc kubenswrapper[4965]: I0219 09:54:21.067716 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73c31c1a-7233-4c2c-b79b-70abd832d746-util\") pod \"73c31c1a-7233-4c2c-b79b-70abd832d746\" (UID: \"73c31c1a-7233-4c2c-b79b-70abd832d746\") " Feb 19 09:54:21 crc kubenswrapper[4965]: I0219 09:54:21.067781 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9ctl\" (UniqueName: \"kubernetes.io/projected/73c31c1a-7233-4c2c-b79b-70abd832d746-kube-api-access-r9ctl\") pod \"73c31c1a-7233-4c2c-b79b-70abd832d746\" (UID: \"73c31c1a-7233-4c2c-b79b-70abd832d746\") " Feb 19 09:54:21 crc kubenswrapper[4965]: I0219 09:54:21.072805 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73c31c1a-7233-4c2c-b79b-70abd832d746-bundle" (OuterVolumeSpecName: "bundle") pod 
"73c31c1a-7233-4c2c-b79b-70abd832d746" (UID: "73c31c1a-7233-4c2c-b79b-70abd832d746"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:54:21 crc kubenswrapper[4965]: I0219 09:54:21.076454 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73c31c1a-7233-4c2c-b79b-70abd832d746-kube-api-access-r9ctl" (OuterVolumeSpecName: "kube-api-access-r9ctl") pod "73c31c1a-7233-4c2c-b79b-70abd832d746" (UID: "73c31c1a-7233-4c2c-b79b-70abd832d746"). InnerVolumeSpecName "kube-api-access-r9ctl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:54:21 crc kubenswrapper[4965]: I0219 09:54:21.169182 4965 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73c31c1a-7233-4c2c-b79b-70abd832d746-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:21 crc kubenswrapper[4965]: I0219 09:54:21.169264 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9ctl\" (UniqueName: \"kubernetes.io/projected/73c31c1a-7233-4c2c-b79b-70abd832d746-kube-api-access-r9ctl\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:21 crc kubenswrapper[4965]: I0219 09:54:21.391159 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73c31c1a-7233-4c2c-b79b-70abd832d746-util" (OuterVolumeSpecName: "util") pod "73c31c1a-7233-4c2c-b79b-70abd832d746" (UID: "73c31c1a-7233-4c2c-b79b-70abd832d746"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:54:21 crc kubenswrapper[4965]: I0219 09:54:21.472727 4965 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73c31c1a-7233-4c2c-b79b-70abd832d746-util\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:21 crc kubenswrapper[4965]: I0219 09:54:21.632428 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7" event={"ID":"73c31c1a-7233-4c2c-b79b-70abd832d746","Type":"ContainerDied","Data":"3ec53897a30b2fa21f0e3c62fe24566765b32c3cc16fc7917f17e0bc82c615ff"} Feb 19 09:54:21 crc kubenswrapper[4965]: I0219 09:54:21.632483 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7" Feb 19 09:54:21 crc kubenswrapper[4965]: I0219 09:54:21.632498 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ec53897a30b2fa21f0e3c62fe24566765b32c3cc16fc7917f17e0bc82c615ff" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.429348 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dcfpx"] Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.430669 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovn-controller" containerID="cri-o://9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab" gracePeriod=30 Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.430774 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="kube-rbac-proxy-node" 
containerID="cri-o://efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a" gracePeriod=30 Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.430774 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1" gracePeriod=30 Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.430865 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="northd" containerID="cri-o://51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3" gracePeriod=30 Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.430818 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovn-acl-logging" containerID="cri-o://ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a" gracePeriod=30 Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.430774 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="nbdb" containerID="cri-o://dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df" gracePeriod=30 Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.430937 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="sbdb" containerID="cri-o://533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2" gracePeriod=30 Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.477075 4965 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovnkube-controller" containerID="cri-o://48a894cd63123be13228cd57371260ac740cdceb7c7280f8d0d01608f0008dff" gracePeriod=30 Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.670776 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dcfpx_7c788dfa-1923-4a2b-9619-73acf92ec849/ovnkube-controller/3.log" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.674146 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dcfpx_7c788dfa-1923-4a2b-9619-73acf92ec849/ovn-acl-logging/0.log" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.674746 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dcfpx_7c788dfa-1923-4a2b-9619-73acf92ec849/ovn-controller/0.log" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.675113 4965 generic.go:334] "Generic (PLEG): container finished" podID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerID="48a894cd63123be13228cd57371260ac740cdceb7c7280f8d0d01608f0008dff" exitCode=0 Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.675140 4965 generic.go:334] "Generic (PLEG): container finished" podID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerID="533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2" exitCode=0 Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.675147 4965 generic.go:334] "Generic (PLEG): container finished" podID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerID="dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df" exitCode=0 Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.675158 4965 generic.go:334] "Generic (PLEG): container finished" podID="7c788dfa-1923-4a2b-9619-73acf92ec849" 
containerID="0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1" exitCode=0 Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.675167 4965 generic.go:334] "Generic (PLEG): container finished" podID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerID="efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a" exitCode=0 Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.675157 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerDied","Data":"48a894cd63123be13228cd57371260ac740cdceb7c7280f8d0d01608f0008dff"} Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.675248 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerDied","Data":"533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2"} Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.675266 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerDied","Data":"dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df"} Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.675279 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerDied","Data":"0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1"} Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.675176 4965 generic.go:334] "Generic (PLEG): container finished" podID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerID="ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a" exitCode=143 Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.675302 4965 generic.go:334] "Generic (PLEG): container finished" 
podID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerID="9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab" exitCode=143 Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.675308 4965 scope.go:117] "RemoveContainer" containerID="df664c777d3f25b8d74075723b13263568db42db0feb4d1c5a85cc38fc50aee9" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.675292 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerDied","Data":"efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a"} Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.675376 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerDied","Data":"ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a"} Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.675392 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerDied","Data":"9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab"} Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.677968 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nsjqz_5e0b10c6-02b7-49d0-9a76-e89ebbb00528/kube-multus/2.log" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.678477 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nsjqz_5e0b10c6-02b7-49d0-9a76-e89ebbb00528/kube-multus/1.log" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.678533 4965 generic.go:334] "Generic (PLEG): container finished" podID="5e0b10c6-02b7-49d0-9a76-e89ebbb00528" containerID="5ce78b16779886d7dcc4f414531a624941d19304ad86ccb93cd0f009d3274b40" exitCode=2 Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 
09:54:26.678569 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nsjqz" event={"ID":"5e0b10c6-02b7-49d0-9a76-e89ebbb00528","Type":"ContainerDied","Data":"5ce78b16779886d7dcc4f414531a624941d19304ad86ccb93cd0f009d3274b40"} Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.679143 4965 scope.go:117] "RemoveContainer" containerID="5ce78b16779886d7dcc4f414531a624941d19304ad86ccb93cd0f009d3274b40" Feb 19 09:54:26 crc kubenswrapper[4965]: E0219 09:54:26.680042 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-nsjqz_openshift-multus(5e0b10c6-02b7-49d0-9a76-e89ebbb00528)\"" pod="openshift-multus/multus-nsjqz" podUID="5e0b10c6-02b7-49d0-9a76-e89ebbb00528" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.780470 4965 scope.go:117] "RemoveContainer" containerID="54890991cfb2ac3b404ed7c4c815f5c02e5a23fed0a82dcbc8b0071ae6bda90b" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.795937 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dcfpx_7c788dfa-1923-4a2b-9619-73acf92ec849/ovn-acl-logging/0.log" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.796457 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dcfpx_7c788dfa-1923-4a2b-9619-73acf92ec849/ovn-controller/0.log" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.796830 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.846693 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7c788dfa-1923-4a2b-9619-73acf92ec849-ovn-node-metrics-cert\") pod \"7c788dfa-1923-4a2b-9619-73acf92ec849\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.846780 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-cni-netd\") pod \"7c788dfa-1923-4a2b-9619-73acf92ec849\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.846806 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-log-socket\") pod \"7c788dfa-1923-4a2b-9619-73acf92ec849\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.846832 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-run-systemd\") pod \"7c788dfa-1923-4a2b-9619-73acf92ec849\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.846853 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-run-ovn-kubernetes\") pod \"7c788dfa-1923-4a2b-9619-73acf92ec849\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.846882 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-var-lib-openvswitch\") pod \"7c788dfa-1923-4a2b-9619-73acf92ec849\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.846903 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c788dfa-1923-4a2b-9619-73acf92ec849-env-overrides\") pod \"7c788dfa-1923-4a2b-9619-73acf92ec849\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.846932 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-node-log\") pod \"7c788dfa-1923-4a2b-9619-73acf92ec849\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.846951 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-slash\") pod \"7c788dfa-1923-4a2b-9619-73acf92ec849\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.846984 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7c788dfa-1923-4a2b-9619-73acf92ec849-ovnkube-script-lib\") pod \"7c788dfa-1923-4a2b-9619-73acf92ec849\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.847014 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-var-lib-cni-networks-ovn-kubernetes\") pod \"7c788dfa-1923-4a2b-9619-73acf92ec849\" (UID: 
\"7c788dfa-1923-4a2b-9619-73acf92ec849\") " Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.847040 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-run-openvswitch\") pod \"7c788dfa-1923-4a2b-9619-73acf92ec849\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.847070 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-etc-openvswitch\") pod \"7c788dfa-1923-4a2b-9619-73acf92ec849\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.847109 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-run-netns\") pod \"7c788dfa-1923-4a2b-9619-73acf92ec849\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.847131 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7c788dfa-1923-4a2b-9619-73acf92ec849-ovnkube-config\") pod \"7c788dfa-1923-4a2b-9619-73acf92ec849\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.847170 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d758w\" (UniqueName: \"kubernetes.io/projected/7c788dfa-1923-4a2b-9619-73acf92ec849-kube-api-access-d758w\") pod \"7c788dfa-1923-4a2b-9619-73acf92ec849\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.847214 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-systemd-units\") pod \"7c788dfa-1923-4a2b-9619-73acf92ec849\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.847250 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-kubelet\") pod \"7c788dfa-1923-4a2b-9619-73acf92ec849\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.847278 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-run-ovn\") pod \"7c788dfa-1923-4a2b-9619-73acf92ec849\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.847299 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-cni-bin\") pod \"7c788dfa-1923-4a2b-9619-73acf92ec849\" (UID: \"7c788dfa-1923-4a2b-9619-73acf92ec849\") " Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.847580 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "7c788dfa-1923-4a2b-9619-73acf92ec849" (UID: "7c788dfa-1923-4a2b-9619-73acf92ec849"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.848795 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "7c788dfa-1923-4a2b-9619-73acf92ec849" (UID: "7c788dfa-1923-4a2b-9619-73acf92ec849"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.848874 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "7c788dfa-1923-4a2b-9619-73acf92ec849" (UID: "7c788dfa-1923-4a2b-9619-73acf92ec849"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.848904 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-log-socket" (OuterVolumeSpecName: "log-socket") pod "7c788dfa-1923-4a2b-9619-73acf92ec849" (UID: "7c788dfa-1923-4a2b-9619-73acf92ec849"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.849368 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c788dfa-1923-4a2b-9619-73acf92ec849-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "7c788dfa-1923-4a2b-9619-73acf92ec849" (UID: "7c788dfa-1923-4a2b-9619-73acf92ec849"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.849410 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "7c788dfa-1923-4a2b-9619-73acf92ec849" (UID: "7c788dfa-1923-4a2b-9619-73acf92ec849"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.849722 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c788dfa-1923-4a2b-9619-73acf92ec849-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7c788dfa-1923-4a2b-9619-73acf92ec849" (UID: "7c788dfa-1923-4a2b-9619-73acf92ec849"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.849751 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-node-log" (OuterVolumeSpecName: "node-log") pod "7c788dfa-1923-4a2b-9619-73acf92ec849" (UID: "7c788dfa-1923-4a2b-9619-73acf92ec849"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.849771 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-slash" (OuterVolumeSpecName: "host-slash") pod "7c788dfa-1923-4a2b-9619-73acf92ec849" (UID: "7c788dfa-1923-4a2b-9619-73acf92ec849"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.850073 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c788dfa-1923-4a2b-9619-73acf92ec849-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7c788dfa-1923-4a2b-9619-73acf92ec849" (UID: "7c788dfa-1923-4a2b-9619-73acf92ec849"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.850098 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "7c788dfa-1923-4a2b-9619-73acf92ec849" (UID: "7c788dfa-1923-4a2b-9619-73acf92ec849"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.850117 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "7c788dfa-1923-4a2b-9619-73acf92ec849" (UID: "7c788dfa-1923-4a2b-9619-73acf92ec849"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.850135 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "7c788dfa-1923-4a2b-9619-73acf92ec849" (UID: "7c788dfa-1923-4a2b-9619-73acf92ec849"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.850152 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "7c788dfa-1923-4a2b-9619-73acf92ec849" (UID: "7c788dfa-1923-4a2b-9619-73acf92ec849"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.850174 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "7c788dfa-1923-4a2b-9619-73acf92ec849" (UID: "7c788dfa-1923-4a2b-9619-73acf92ec849"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.853416 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "7c788dfa-1923-4a2b-9619-73acf92ec849" (UID: "7c788dfa-1923-4a2b-9619-73acf92ec849"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.853525 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "7c788dfa-1923-4a2b-9619-73acf92ec849" (UID: "7c788dfa-1923-4a2b-9619-73acf92ec849"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.854535 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c2rms"] Feb 19 09:54:26 crc kubenswrapper[4965]: E0219 09:54:26.854792 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73c31c1a-7233-4c2c-b79b-70abd832d746" containerName="pull" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.854819 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="73c31c1a-7233-4c2c-b79b-70abd832d746" containerName="pull" Feb 19 09:54:26 crc kubenswrapper[4965]: E0219 09:54:26.854836 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="northd" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.854846 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="northd" Feb 19 09:54:26 crc kubenswrapper[4965]: E0219 09:54:26.854865 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovn-acl-logging" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.854873 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovn-acl-logging" Feb 19 09:54:26 crc kubenswrapper[4965]: E0219 09:54:26.854885 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73c31c1a-7233-4c2c-b79b-70abd832d746" containerName="extract" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.854895 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="73c31c1a-7233-4c2c-b79b-70abd832d746" containerName="extract" Feb 19 09:54:26 crc kubenswrapper[4965]: E0219 09:54:26.854911 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovnkube-controller" Feb 19 09:54:26 crc 
kubenswrapper[4965]: I0219 09:54:26.854923 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovnkube-controller" Feb 19 09:54:26 crc kubenswrapper[4965]: E0219 09:54:26.854937 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovn-controller" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.854947 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovn-controller" Feb 19 09:54:26 crc kubenswrapper[4965]: E0219 09:54:26.854960 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.854970 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 09:54:26 crc kubenswrapper[4965]: E0219 09:54:26.854982 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="kubecfg-setup" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.854992 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="kubecfg-setup" Feb 19 09:54:26 crc kubenswrapper[4965]: E0219 09:54:26.855003 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="sbdb" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.855015 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="sbdb" Feb 19 09:54:26 crc kubenswrapper[4965]: E0219 09:54:26.855031 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovnkube-controller" Feb 19 09:54:26 crc 
kubenswrapper[4965]: I0219 09:54:26.855042 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovnkube-controller" Feb 19 09:54:26 crc kubenswrapper[4965]: E0219 09:54:26.855055 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73c31c1a-7233-4c2c-b79b-70abd832d746" containerName="util" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.855063 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="73c31c1a-7233-4c2c-b79b-70abd832d746" containerName="util" Feb 19 09:54:26 crc kubenswrapper[4965]: E0219 09:54:26.855078 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovnkube-controller" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.855086 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovnkube-controller" Feb 19 09:54:26 crc kubenswrapper[4965]: E0219 09:54:26.855095 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="nbdb" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.855103 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="nbdb" Feb 19 09:54:26 crc kubenswrapper[4965]: E0219 09:54:26.855112 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovnkube-controller" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.855120 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovnkube-controller" Feb 19 09:54:26 crc kubenswrapper[4965]: E0219 09:54:26.855134 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="kube-rbac-proxy-node" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 
09:54:26.855142 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="kube-rbac-proxy-node" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.855289 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="kube-rbac-proxy-node" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.855303 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="nbdb" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.855315 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovnkube-controller" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.855327 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="northd" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.855338 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="73c31c1a-7233-4c2c-b79b-70abd832d746" containerName="extract" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.855348 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovnkube-controller" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.855358 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovn-controller" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.855371 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.855384 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovn-acl-logging" Feb 19 09:54:26 crc 
kubenswrapper[4965]: I0219 09:54:26.855394 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="sbdb" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.855404 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovnkube-controller" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.855412 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovnkube-controller" Feb 19 09:54:26 crc kubenswrapper[4965]: E0219 09:54:26.855541 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovnkube-controller" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.855551 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovnkube-controller" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.855932 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c788dfa-1923-4a2b-9619-73acf92ec849-kube-api-access-d758w" (OuterVolumeSpecName: "kube-api-access-d758w") pod "7c788dfa-1923-4a2b-9619-73acf92ec849" (UID: "7c788dfa-1923-4a2b-9619-73acf92ec849"). InnerVolumeSpecName "kube-api-access-d758w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.858981 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerName="ovnkube-controller" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.860804 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.862026 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c788dfa-1923-4a2b-9619-73acf92ec849-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "7c788dfa-1923-4a2b-9619-73acf92ec849" (UID: "7c788dfa-1923-4a2b-9619-73acf92ec849"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.864728 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "7c788dfa-1923-4a2b-9619-73acf92ec849" (UID: "7c788dfa-1923-4a2b-9619-73acf92ec849"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.948911 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-host-run-netns\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.948972 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-etc-openvswitch\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949035 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/c74f4d75-fecc-44be-84ca-3d4871e0afad-ovnkube-script-lib\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949070 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-host-kubelet\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949097 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5n77\" (UniqueName: \"kubernetes.io/projected/c74f4d75-fecc-44be-84ca-3d4871e0afad-kube-api-access-z5n77\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949120 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-node-log\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949140 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-systemd-units\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949180 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" 
(UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-log-socket\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949230 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-var-lib-openvswitch\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949350 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949421 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-run-ovn\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949448 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c74f4d75-fecc-44be-84ca-3d4871e0afad-ovnkube-config\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949471 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-host-slash\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949497 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-host-run-ovn-kubernetes\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949554 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-host-cni-netd\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949577 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-host-cni-bin\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949628 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-run-systemd\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949662 4965 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c74f4d75-fecc-44be-84ca-3d4871e0afad-ovn-node-metrics-cert\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949691 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-run-openvswitch\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949712 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c74f4d75-fecc-44be-84ca-3d4871e0afad-env-overrides\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949810 4965 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949831 4965 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949845 4965 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-etc-openvswitch\") on node \"crc\" 
DevicePath \"\"" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949856 4965 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949867 4965 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7c788dfa-1923-4a2b-9619-73acf92ec849-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949879 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d758w\" (UniqueName: \"kubernetes.io/projected/7c788dfa-1923-4a2b-9619-73acf92ec849-kube-api-access-d758w\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949892 4965 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949902 4965 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949913 4965 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949923 4965 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949933 4965 reconciler_common.go:293] "Volume detached for 
volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7c788dfa-1923-4a2b-9619-73acf92ec849-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949942 4965 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949952 4965 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-log-socket\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949962 4965 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949972 4965 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949982 4965 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c788dfa-1923-4a2b-9619-73acf92ec849-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.949992 4965 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.950003 4965 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-node-log\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.950014 4965 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7c788dfa-1923-4a2b-9619-73acf92ec849-host-slash\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:26 crc kubenswrapper[4965]: I0219 09:54:26.950024 4965 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7c788dfa-1923-4a2b-9619-73acf92ec849-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.050720 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-host-run-netns\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.050783 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-etc-openvswitch\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.050814 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c74f4d75-fecc-44be-84ca-3d4871e0afad-ovnkube-script-lib\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.050836 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-host-kubelet\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.050841 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-host-run-netns\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.050861 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5n77\" (UniqueName: \"kubernetes.io/projected/c74f4d75-fecc-44be-84ca-3d4871e0afad-kube-api-access-z5n77\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.050900 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-systemd-units\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.050926 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-node-log\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.050932 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-etc-openvswitch\") pod 
\"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.050978 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-log-socket\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.050954 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-log-socket\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.051052 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-host-kubelet\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.051102 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-systemd-units\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.051145 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-node-log\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc 
kubenswrapper[4965]: I0219 09:54:27.051232 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-var-lib-openvswitch\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.051268 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.051306 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-run-ovn\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.051341 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c74f4d75-fecc-44be-84ca-3d4871e0afad-ovnkube-config\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.051366 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-host-slash\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 
09:54:27.051399 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-host-run-ovn-kubernetes\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.051438 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-host-cni-netd\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.051461 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-host-cni-bin\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.051486 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-run-systemd\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.051514 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c74f4d75-fecc-44be-84ca-3d4871e0afad-ovn-node-metrics-cert\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.051554 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-run-openvswitch\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.051584 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c74f4d75-fecc-44be-84ca-3d4871e0afad-env-overrides\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.051703 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-host-run-ovn-kubernetes\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.051750 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-run-ovn\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.051813 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-host-slash\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.051807 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.051779 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-var-lib-openvswitch\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.051860 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c74f4d75-fecc-44be-84ca-3d4871e0afad-ovnkube-script-lib\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.051901 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-host-cni-netd\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.051830 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-run-openvswitch\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.052037 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-host-cni-bin\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.052406 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c74f4d75-fecc-44be-84ca-3d4871e0afad-env-overrides\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.052493 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c74f4d75-fecc-44be-84ca-3d4871e0afad-ovnkube-config\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.052496 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c74f4d75-fecc-44be-84ca-3d4871e0afad-run-systemd\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.057821 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c74f4d75-fecc-44be-84ca-3d4871e0afad-ovn-node-metrics-cert\") pod \"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.096140 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5n77\" (UniqueName: \"kubernetes.io/projected/c74f4d75-fecc-44be-84ca-3d4871e0afad-kube-api-access-z5n77\") pod 
\"ovnkube-node-c2rms\" (UID: \"c74f4d75-fecc-44be-84ca-3d4871e0afad\") " pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.175212 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.687632 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dcfpx_7c788dfa-1923-4a2b-9619-73acf92ec849/ovn-acl-logging/0.log" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.688088 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dcfpx_7c788dfa-1923-4a2b-9619-73acf92ec849/ovn-controller/0.log" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.688448 4965 generic.go:334] "Generic (PLEG): container finished" podID="7c788dfa-1923-4a2b-9619-73acf92ec849" containerID="51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3" exitCode=0 Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.688531 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerDied","Data":"51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3"} Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.688560 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" event={"ID":"7c788dfa-1923-4a2b-9619-73acf92ec849","Type":"ContainerDied","Data":"e29fc7842b60cddc6bf76bc025db661541b45e08fe3f04f198c9f7210e22408a"} Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.688579 4965 scope.go:117] "RemoveContainer" containerID="48a894cd63123be13228cd57371260ac740cdceb7c7280f8d0d01608f0008dff" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.688685 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dcfpx" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.691423 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nsjqz_5e0b10c6-02b7-49d0-9a76-e89ebbb00528/kube-multus/2.log" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.695695 4965 generic.go:334] "Generic (PLEG): container finished" podID="c74f4d75-fecc-44be-84ca-3d4871e0afad" containerID="a7a8b783613223370d5d3fbaac4d38ae30acf98a4d1347517aceeeb16fd093fe" exitCode=0 Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.695800 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" event={"ID":"c74f4d75-fecc-44be-84ca-3d4871e0afad","Type":"ContainerDied","Data":"a7a8b783613223370d5d3fbaac4d38ae30acf98a4d1347517aceeeb16fd093fe"} Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.695893 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" event={"ID":"c74f4d75-fecc-44be-84ca-3d4871e0afad","Type":"ContainerStarted","Data":"2bf6e2dd5ec97137ef3290d33139a3c52ba40060ffb61e7c0f42a7735d195357"} Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.707488 4965 scope.go:117] "RemoveContainer" containerID="533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.719993 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dcfpx"] Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.728680 4965 scope.go:117] "RemoveContainer" containerID="dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.728981 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dcfpx"] Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.746884 4965 scope.go:117] "RemoveContainer" 
containerID="51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.787736 4965 scope.go:117] "RemoveContainer" containerID="0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.809364 4965 scope.go:117] "RemoveContainer" containerID="efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.835927 4965 scope.go:117] "RemoveContainer" containerID="ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.857638 4965 scope.go:117] "RemoveContainer" containerID="9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.882721 4965 scope.go:117] "RemoveContainer" containerID="743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.908482 4965 scope.go:117] "RemoveContainer" containerID="48a894cd63123be13228cd57371260ac740cdceb7c7280f8d0d01608f0008dff" Feb 19 09:54:27 crc kubenswrapper[4965]: E0219 09:54:27.909649 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48a894cd63123be13228cd57371260ac740cdceb7c7280f8d0d01608f0008dff\": container with ID starting with 48a894cd63123be13228cd57371260ac740cdceb7c7280f8d0d01608f0008dff not found: ID does not exist" containerID="48a894cd63123be13228cd57371260ac740cdceb7c7280f8d0d01608f0008dff" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.909703 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48a894cd63123be13228cd57371260ac740cdceb7c7280f8d0d01608f0008dff"} err="failed to get container status \"48a894cd63123be13228cd57371260ac740cdceb7c7280f8d0d01608f0008dff\": rpc error: code = NotFound desc = could not 
find container \"48a894cd63123be13228cd57371260ac740cdceb7c7280f8d0d01608f0008dff\": container with ID starting with 48a894cd63123be13228cd57371260ac740cdceb7c7280f8d0d01608f0008dff not found: ID does not exist" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.909730 4965 scope.go:117] "RemoveContainer" containerID="533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2" Feb 19 09:54:27 crc kubenswrapper[4965]: E0219 09:54:27.910065 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\": container with ID starting with 533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2 not found: ID does not exist" containerID="533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.910209 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2"} err="failed to get container status \"533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\": rpc error: code = NotFound desc = could not find container \"533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2\": container with ID starting with 533452e14c9d0d57a451ec0dd06097f87f60658a8f008203b29c31b2b5310eb2 not found: ID does not exist" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.910304 4965 scope.go:117] "RemoveContainer" containerID="dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df" Feb 19 09:54:27 crc kubenswrapper[4965]: E0219 09:54:27.911304 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\": container with ID starting with dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df not found: ID 
does not exist" containerID="dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.911357 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df"} err="failed to get container status \"dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\": rpc error: code = NotFound desc = could not find container \"dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df\": container with ID starting with dac7fd5095ec7fd8ce98b9150bd5c0a642004e2c1239a6fa1ff002efa67471df not found: ID does not exist" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.911390 4965 scope.go:117] "RemoveContainer" containerID="51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3" Feb 19 09:54:27 crc kubenswrapper[4965]: E0219 09:54:27.912558 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\": container with ID starting with 51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3 not found: ID does not exist" containerID="51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.912617 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3"} err="failed to get container status \"51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\": rpc error: code = NotFound desc = could not find container \"51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3\": container with ID starting with 51316b32af59fe23cdf832fbc0b37b11f74d3a57d01eed32ca30a196d4c7e2c3 not found: ID does not exist" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.912654 4965 
scope.go:117] "RemoveContainer" containerID="0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1" Feb 19 09:54:27 crc kubenswrapper[4965]: E0219 09:54:27.912930 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\": container with ID starting with 0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1 not found: ID does not exist" containerID="0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.912962 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1"} err="failed to get container status \"0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\": rpc error: code = NotFound desc = could not find container \"0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1\": container with ID starting with 0ebb933d7238665138ec7e854756522607a2814b48116b2ce4474869b39344c1 not found: ID does not exist" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.912981 4965 scope.go:117] "RemoveContainer" containerID="efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a" Feb 19 09:54:27 crc kubenswrapper[4965]: E0219 09:54:27.913223 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\": container with ID starting with efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a not found: ID does not exist" containerID="efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.913250 4965 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a"} err="failed to get container status \"efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\": rpc error: code = NotFound desc = could not find container \"efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a\": container with ID starting with efa60b6875cede631c9383845eb085f96d62a6365609f1f98b84165b54e0872a not found: ID does not exist" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.913271 4965 scope.go:117] "RemoveContainer" containerID="ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a" Feb 19 09:54:27 crc kubenswrapper[4965]: E0219 09:54:27.913512 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\": container with ID starting with ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a not found: ID does not exist" containerID="ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.913547 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a"} err="failed to get container status \"ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\": rpc error: code = NotFound desc = could not find container \"ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a\": container with ID starting with ccba1acfe523175d218c25c2f59a6f9874426235c9cba981a80cc53aca12408a not found: ID does not exist" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.913568 4965 scope.go:117] "RemoveContainer" containerID="9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab" Feb 19 09:54:27 crc kubenswrapper[4965]: E0219 09:54:27.913775 4965 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\": container with ID starting with 9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab not found: ID does not exist" containerID="9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.913807 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab"} err="failed to get container status \"9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\": rpc error: code = NotFound desc = could not find container \"9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab\": container with ID starting with 9bc418c94085bcd4ed93250cce9eb6bc122cd045035b72800df2bdf4b364d6ab not found: ID does not exist" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.913826 4965 scope.go:117] "RemoveContainer" containerID="743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256" Feb 19 09:54:27 crc kubenswrapper[4965]: E0219 09:54:27.914056 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\": container with ID starting with 743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256 not found: ID does not exist" containerID="743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256" Feb 19 09:54:27 crc kubenswrapper[4965]: I0219 09:54:27.914087 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256"} err="failed to get container status \"743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\": rpc error: code = NotFound desc = could not find container 
\"743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256\": container with ID starting with 743e3e1a0f2cf7a4842ca3162169cc1bbc399a823070546625450cc140220256 not found: ID does not exist" Feb 19 09:54:28 crc kubenswrapper[4965]: I0219 09:54:28.706407 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" event={"ID":"c74f4d75-fecc-44be-84ca-3d4871e0afad","Type":"ContainerStarted","Data":"057d7ed17e6fe9fc41ef10e7953b178db8918b522a8be5440c263116ae5bcb23"} Feb 19 09:54:28 crc kubenswrapper[4965]: I0219 09:54:28.706448 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" event={"ID":"c74f4d75-fecc-44be-84ca-3d4871e0afad","Type":"ContainerStarted","Data":"c46f23dced879ffa0d0ec7ac783e18858d5dc204c8a1c3602473c467902f8278"} Feb 19 09:54:28 crc kubenswrapper[4965]: I0219 09:54:28.706459 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" event={"ID":"c74f4d75-fecc-44be-84ca-3d4871e0afad","Type":"ContainerStarted","Data":"a6545794eb65fe584f563553b2d63cca313fabefbc166d4e4141b747375676c6"} Feb 19 09:54:28 crc kubenswrapper[4965]: I0219 09:54:28.706468 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" event={"ID":"c74f4d75-fecc-44be-84ca-3d4871e0afad","Type":"ContainerStarted","Data":"aebf4f1442ebb6574947f9eebbf910d1ee49026d18aaccabdfb693b2f6ff9c08"} Feb 19 09:54:28 crc kubenswrapper[4965]: I0219 09:54:28.706477 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" event={"ID":"c74f4d75-fecc-44be-84ca-3d4871e0afad","Type":"ContainerStarted","Data":"50fce7e5b2e414e09b2724d557a91836e8fbc226b8ea59f8c675b8213fee62c8"} Feb 19 09:54:29 crc kubenswrapper[4965]: I0219 09:54:29.204769 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c788dfa-1923-4a2b-9619-73acf92ec849" 
path="/var/lib/kubelet/pods/7c788dfa-1923-4a2b-9619-73acf92ec849/volumes" Feb 19 09:54:29 crc kubenswrapper[4965]: I0219 09:54:29.715561 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" event={"ID":"c74f4d75-fecc-44be-84ca-3d4871e0afad","Type":"ContainerStarted","Data":"15cee5b9ada3cd04d0ddf0cd413fc8a0d197c7133a0bc5b6af961c9a638f6ff1"} Feb 19 09:54:31 crc kubenswrapper[4965]: I0219 09:54:31.727728 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" event={"ID":"c74f4d75-fecc-44be-84ca-3d4871e0afad","Type":"ContainerStarted","Data":"5d6f5b96643699e094294eeeb80394f98a12bdcb9da144a6207e65969584249e"} Feb 19 09:54:32 crc kubenswrapper[4965]: I0219 09:54:32.846061 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7"] Feb 19 09:54:32 crc kubenswrapper[4965]: I0219 09:54:32.847039 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7" Feb 19 09:54:32 crc kubenswrapper[4965]: I0219 09:54:32.851506 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 19 09:54:32 crc kubenswrapper[4965]: I0219 09:54:32.851797 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 19 09:54:32 crc kubenswrapper[4965]: I0219 09:54:32.852180 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-hvrkn" Feb 19 09:54:32 crc kubenswrapper[4965]: I0219 09:54:32.891600 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv"] Feb 19 09:54:32 crc kubenswrapper[4965]: I0219 09:54:32.892305 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" Feb 19 09:54:32 crc kubenswrapper[4965]: I0219 09:54:32.893932 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 19 09:54:32 crc kubenswrapper[4965]: I0219 09:54:32.895417 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-7vl86" Feb 19 09:54:32 crc kubenswrapper[4965]: I0219 09:54:32.924668 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88v8j\" (UniqueName: \"kubernetes.io/projected/97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1-kube-api-access-88v8j\") pod \"obo-prometheus-operator-68bc856cb9-qfjz7\" (UID: \"97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7" Feb 19 09:54:32 crc kubenswrapper[4965]: I0219 09:54:32.927457 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db"] Feb 19 09:54:32 crc kubenswrapper[4965]: I0219 09:54:32.928105 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.025616 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e50e1bd-3144-4362-9c46-355cfb2ba24f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5675bf8465-d42dv\" (UID: \"0e50e1bd-3144-4362-9c46-355cfb2ba24f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.025686 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e50e1bd-3144-4362-9c46-355cfb2ba24f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5675bf8465-d42dv\" (UID: \"0e50e1bd-3144-4362-9c46-355cfb2ba24f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.025722 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88v8j\" (UniqueName: \"kubernetes.io/projected/97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1-kube-api-access-88v8j\") pod \"obo-prometheus-operator-68bc856cb9-qfjz7\" (UID: \"97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.025753 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0d85e95a-22ec-4364-a43c-04e60d68be0d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5675bf8465-h45db\" (UID: \"0d85e95a-22ec-4364-a43c-04e60d68be0d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" Feb 19 09:54:33 crc 
kubenswrapper[4965]: I0219 09:54:33.025828 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d85e95a-22ec-4364-a43c-04e60d68be0d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5675bf8465-h45db\" (UID: \"0d85e95a-22ec-4364-a43c-04e60d68be0d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.048519 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88v8j\" (UniqueName: \"kubernetes.io/projected/97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1-kube-api-access-88v8j\") pod \"obo-prometheus-operator-68bc856cb9-qfjz7\" (UID: \"97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.076899 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-h4689"] Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.077989 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-h4689" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.081318 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-4x4xj" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.081355 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.126811 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0d85e95a-22ec-4364-a43c-04e60d68be0d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5675bf8465-h45db\" (UID: \"0d85e95a-22ec-4364-a43c-04e60d68be0d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.127155 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d85e95a-22ec-4364-a43c-04e60d68be0d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5675bf8465-h45db\" (UID: \"0d85e95a-22ec-4364-a43c-04e60d68be0d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.127408 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e50e1bd-3144-4362-9c46-355cfb2ba24f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5675bf8465-d42dv\" (UID: \"0e50e1bd-3144-4362-9c46-355cfb2ba24f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.127511 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/0e50e1bd-3144-4362-9c46-355cfb2ba24f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5675bf8465-d42dv\" (UID: \"0e50e1bd-3144-4362-9c46-355cfb2ba24f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.131152 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0d85e95a-22ec-4364-a43c-04e60d68be0d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5675bf8465-h45db\" (UID: \"0d85e95a-22ec-4364-a43c-04e60d68be0d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.131857 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e50e1bd-3144-4362-9c46-355cfb2ba24f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5675bf8465-d42dv\" (UID: \"0e50e1bd-3144-4362-9c46-355cfb2ba24f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.134238 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e50e1bd-3144-4362-9c46-355cfb2ba24f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5675bf8465-d42dv\" (UID: \"0e50e1bd-3144-4362-9c46-355cfb2ba24f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.136091 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d85e95a-22ec-4364-a43c-04e60d68be0d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5675bf8465-h45db\" (UID: \"0d85e95a-22ec-4364-a43c-04e60d68be0d\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.166034 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.185917 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-x7xjb"] Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.186811 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.189325 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-n78qm" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.207485 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" Feb 19 09:54:33 crc kubenswrapper[4965]: E0219 09:54:33.208356 4965 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qfjz7_openshift-operators_97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1_0(16bb2bc16171a3629074ef8deff491018abc76a855cb3bb3b8a09cdec75b6319): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 09:54:33 crc kubenswrapper[4965]: E0219 09:54:33.208433 4965 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qfjz7_openshift-operators_97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1_0(16bb2bc16171a3629074ef8deff491018abc76a855cb3bb3b8a09cdec75b6319): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7" Feb 19 09:54:33 crc kubenswrapper[4965]: E0219 09:54:33.208461 4965 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qfjz7_openshift-operators_97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1_0(16bb2bc16171a3629074ef8deff491018abc76a855cb3bb3b8a09cdec75b6319): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7" Feb 19 09:54:33 crc kubenswrapper[4965]: E0219 09:54:33.208505 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-qfjz7_openshift-operators(97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-qfjz7_openshift-operators(97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qfjz7_openshift-operators_97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1_0(16bb2bc16171a3629074ef8deff491018abc76a855cb3bb3b8a09cdec75b6319): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7" podUID="97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.228567 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7wgq\" (UniqueName: \"kubernetes.io/projected/b7e1070f-f099-4a4f-a107-c1b8589af7c7-kube-api-access-f7wgq\") pod \"observability-operator-59bdc8b94-h4689\" (UID: \"b7e1070f-f099-4a4f-a107-c1b8589af7c7\") " pod="openshift-operators/observability-operator-59bdc8b94-h4689" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.228631 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b7e1070f-f099-4a4f-a107-c1b8589af7c7-observability-operator-tls\") pod \"observability-operator-59bdc8b94-h4689\" (UID: \"b7e1070f-f099-4a4f-a107-c1b8589af7c7\") " pod="openshift-operators/observability-operator-59bdc8b94-h4689" Feb 19 09:54:33 crc kubenswrapper[4965]: E0219 09:54:33.232048 4965 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-d42dv_openshift-operators_0e50e1bd-3144-4362-9c46-355cfb2ba24f_0(a8c27d01da4ee58fdfae257d917d08ff4309ba3ab865394a758ee0ff93e1ba50): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 09:54:33 crc kubenswrapper[4965]: E0219 09:54:33.232133 4965 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-d42dv_openshift-operators_0e50e1bd-3144-4362-9c46-355cfb2ba24f_0(a8c27d01da4ee58fdfae257d917d08ff4309ba3ab865394a758ee0ff93e1ba50): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" Feb 19 09:54:33 crc kubenswrapper[4965]: E0219 09:54:33.232163 4965 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-d42dv_openshift-operators_0e50e1bd-3144-4362-9c46-355cfb2ba24f_0(a8c27d01da4ee58fdfae257d917d08ff4309ba3ab865394a758ee0ff93e1ba50): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" Feb 19 09:54:33 crc kubenswrapper[4965]: E0219 09:54:33.232237 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5675bf8465-d42dv_openshift-operators(0e50e1bd-3144-4362-9c46-355cfb2ba24f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5675bf8465-d42dv_openshift-operators(0e50e1bd-3144-4362-9c46-355cfb2ba24f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-d42dv_openshift-operators_0e50e1bd-3144-4362-9c46-355cfb2ba24f_0(a8c27d01da4ee58fdfae257d917d08ff4309ba3ab865394a758ee0ff93e1ba50): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" podUID="0e50e1bd-3144-4362-9c46-355cfb2ba24f" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.252017 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" Feb 19 09:54:33 crc kubenswrapper[4965]: E0219 09:54:33.275023 4965 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-h45db_openshift-operators_0d85e95a-22ec-4364-a43c-04e60d68be0d_0(34374b0cc38e16ebadc8bdb7a324dd3e3026c2c010378a61ae4b4832c2ee47ae): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 09:54:33 crc kubenswrapper[4965]: E0219 09:54:33.275120 4965 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-h45db_openshift-operators_0d85e95a-22ec-4364-a43c-04e60d68be0d_0(34374b0cc38e16ebadc8bdb7a324dd3e3026c2c010378a61ae4b4832c2ee47ae): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" Feb 19 09:54:33 crc kubenswrapper[4965]: E0219 09:54:33.275149 4965 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-h45db_openshift-operators_0d85e95a-22ec-4364-a43c-04e60d68be0d_0(34374b0cc38e16ebadc8bdb7a324dd3e3026c2c010378a61ae4b4832c2ee47ae): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" Feb 19 09:54:33 crc kubenswrapper[4965]: E0219 09:54:33.275229 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5675bf8465-h45db_openshift-operators(0d85e95a-22ec-4364-a43c-04e60d68be0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5675bf8465-h45db_openshift-operators(0d85e95a-22ec-4364-a43c-04e60d68be0d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-h45db_openshift-operators_0d85e95a-22ec-4364-a43c-04e60d68be0d_0(34374b0cc38e16ebadc8bdb7a324dd3e3026c2c010378a61ae4b4832c2ee47ae): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" podUID="0d85e95a-22ec-4364-a43c-04e60d68be0d" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.329520 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b7e1070f-f099-4a4f-a107-c1b8589af7c7-observability-operator-tls\") pod \"observability-operator-59bdc8b94-h4689\" (UID: \"b7e1070f-f099-4a4f-a107-c1b8589af7c7\") " pod="openshift-operators/observability-operator-59bdc8b94-h4689" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.329627 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhs8d\" (UniqueName: \"kubernetes.io/projected/d55c4261-3d41-49fd-97dd-098bb8747449-kube-api-access-hhs8d\") pod \"perses-operator-5bf474d74f-x7xjb\" (UID: \"d55c4261-3d41-49fd-97dd-098bb8747449\") " pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.329658 4965 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d55c4261-3d41-49fd-97dd-098bb8747449-openshift-service-ca\") pod \"perses-operator-5bf474d74f-x7xjb\" (UID: \"d55c4261-3d41-49fd-97dd-098bb8747449\") " pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.329682 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7wgq\" (UniqueName: \"kubernetes.io/projected/b7e1070f-f099-4a4f-a107-c1b8589af7c7-kube-api-access-f7wgq\") pod \"observability-operator-59bdc8b94-h4689\" (UID: \"b7e1070f-f099-4a4f-a107-c1b8589af7c7\") " pod="openshift-operators/observability-operator-59bdc8b94-h4689" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.336826 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b7e1070f-f099-4a4f-a107-c1b8589af7c7-observability-operator-tls\") pod \"observability-operator-59bdc8b94-h4689\" (UID: \"b7e1070f-f099-4a4f-a107-c1b8589af7c7\") " pod="openshift-operators/observability-operator-59bdc8b94-h4689" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.345916 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7wgq\" (UniqueName: \"kubernetes.io/projected/b7e1070f-f099-4a4f-a107-c1b8589af7c7-kube-api-access-f7wgq\") pod \"observability-operator-59bdc8b94-h4689\" (UID: \"b7e1070f-f099-4a4f-a107-c1b8589af7c7\") " pod="openshift-operators/observability-operator-59bdc8b94-h4689" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.396093 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-h4689" Feb 19 09:54:33 crc kubenswrapper[4965]: E0219 09:54:33.415713 4965 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-h4689_openshift-operators_b7e1070f-f099-4a4f-a107-c1b8589af7c7_0(3372adfe49f06021983d47179a6d9e3c09bc83c47953b0541f1154ac10f1ec5b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 09:54:33 crc kubenswrapper[4965]: E0219 09:54:33.415796 4965 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-h4689_openshift-operators_b7e1070f-f099-4a4f-a107-c1b8589af7c7_0(3372adfe49f06021983d47179a6d9e3c09bc83c47953b0541f1154ac10f1ec5b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-h4689" Feb 19 09:54:33 crc kubenswrapper[4965]: E0219 09:54:33.415824 4965 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-h4689_openshift-operators_b7e1070f-f099-4a4f-a107-c1b8589af7c7_0(3372adfe49f06021983d47179a6d9e3c09bc83c47953b0541f1154ac10f1ec5b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-h4689" Feb 19 09:54:33 crc kubenswrapper[4965]: E0219 09:54:33.415894 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-h4689_openshift-operators(b7e1070f-f099-4a4f-a107-c1b8589af7c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-h4689_openshift-operators(b7e1070f-f099-4a4f-a107-c1b8589af7c7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-h4689_openshift-operators_b7e1070f-f099-4a4f-a107-c1b8589af7c7_0(3372adfe49f06021983d47179a6d9e3c09bc83c47953b0541f1154ac10f1ec5b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-h4689" podUID="b7e1070f-f099-4a4f-a107-c1b8589af7c7" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.431322 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhs8d\" (UniqueName: \"kubernetes.io/projected/d55c4261-3d41-49fd-97dd-098bb8747449-kube-api-access-hhs8d\") pod \"perses-operator-5bf474d74f-x7xjb\" (UID: \"d55c4261-3d41-49fd-97dd-098bb8747449\") " pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.431384 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d55c4261-3d41-49fd-97dd-098bb8747449-openshift-service-ca\") pod \"perses-operator-5bf474d74f-x7xjb\" (UID: \"d55c4261-3d41-49fd-97dd-098bb8747449\") " pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.432680 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/d55c4261-3d41-49fd-97dd-098bb8747449-openshift-service-ca\") pod \"perses-operator-5bf474d74f-x7xjb\" (UID: \"d55c4261-3d41-49fd-97dd-098bb8747449\") " pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.461227 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhs8d\" (UniqueName: \"kubernetes.io/projected/d55c4261-3d41-49fd-97dd-098bb8747449-kube-api-access-hhs8d\") pod \"perses-operator-5bf474d74f-x7xjb\" (UID: \"d55c4261-3d41-49fd-97dd-098bb8747449\") " pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.522234 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" Feb 19 09:54:33 crc kubenswrapper[4965]: E0219 09:54:33.543028 4965 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-x7xjb_openshift-operators_d55c4261-3d41-49fd-97dd-098bb8747449_0(1fb1bd5e3d5fb42f2551a80de416ff5f9a64ed5a7c77c494fc6950bfa514cd59): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 09:54:33 crc kubenswrapper[4965]: E0219 09:54:33.543181 4965 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-x7xjb_openshift-operators_d55c4261-3d41-49fd-97dd-098bb8747449_0(1fb1bd5e3d5fb42f2551a80de416ff5f9a64ed5a7c77c494fc6950bfa514cd59): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" Feb 19 09:54:33 crc kubenswrapper[4965]: E0219 09:54:33.543240 4965 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-x7xjb_openshift-operators_d55c4261-3d41-49fd-97dd-098bb8747449_0(1fb1bd5e3d5fb42f2551a80de416ff5f9a64ed5a7c77c494fc6950bfa514cd59): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" Feb 19 09:54:33 crc kubenswrapper[4965]: E0219 09:54:33.543308 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-x7xjb_openshift-operators(d55c4261-3d41-49fd-97dd-098bb8747449)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-x7xjb_openshift-operators(d55c4261-3d41-49fd-97dd-098bb8747449)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-x7xjb_openshift-operators_d55c4261-3d41-49fd-97dd-098bb8747449_0(1fb1bd5e3d5fb42f2551a80de416ff5f9a64ed5a7c77c494fc6950bfa514cd59): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" podUID="d55c4261-3d41-49fd-97dd-098bb8747449" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.750307 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" event={"ID":"c74f4d75-fecc-44be-84ca-3d4871e0afad","Type":"ContainerStarted","Data":"3e1440d84e5d911b4aa3e4f46d78d9904e570ed79a405409416f3d29e6c42767"} Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.750642 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.750758 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.750821 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.781364 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" podStartSLOduration=7.781344669 podStartE2EDuration="7.781344669s" podCreationTimestamp="2026-02-19 09:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:54:33.779469802 +0000 UTC m=+729.400791112" watchObservedRunningTime="2026-02-19 09:54:33.781344669 +0000 UTC m=+729.402665979" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.786226 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:33 crc kubenswrapper[4965]: I0219 09:54:33.791755 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:54:34 crc kubenswrapper[4965]: 
I0219 09:54:34.282287 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-h4689"] Feb 19 09:54:34 crc kubenswrapper[4965]: I0219 09:54:34.282443 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-h4689" Feb 19 09:54:34 crc kubenswrapper[4965]: I0219 09:54:34.283158 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-h4689" Feb 19 09:54:34 crc kubenswrapper[4965]: I0219 09:54:34.285748 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db"] Feb 19 09:54:34 crc kubenswrapper[4965]: I0219 09:54:34.285887 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" Feb 19 09:54:34 crc kubenswrapper[4965]: I0219 09:54:34.286372 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" Feb 19 09:54:34 crc kubenswrapper[4965]: I0219 09:54:34.303633 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv"] Feb 19 09:54:34 crc kubenswrapper[4965]: I0219 09:54:34.303975 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" Feb 19 09:54:34 crc kubenswrapper[4965]: I0219 09:54:34.304589 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" Feb 19 09:54:34 crc kubenswrapper[4965]: I0219 09:54:34.306992 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-x7xjb"] Feb 19 09:54:34 crc kubenswrapper[4965]: I0219 09:54:34.307126 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" Feb 19 09:54:34 crc kubenswrapper[4965]: I0219 09:54:34.307830 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" Feb 19 09:54:34 crc kubenswrapper[4965]: I0219 09:54:34.314540 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7"] Feb 19 09:54:34 crc kubenswrapper[4965]: I0219 09:54:34.314668 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7" Feb 19 09:54:34 crc kubenswrapper[4965]: I0219 09:54:34.315110 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7" Feb 19 09:54:34 crc kubenswrapper[4965]: E0219 09:54:34.332958 4965 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-h4689_openshift-operators_b7e1070f-f099-4a4f-a107-c1b8589af7c7_0(a8c08da656e370c59a2f094c5bcf964581d10bc0f8a327a18e0a90a1f42dab9e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 19 09:54:34 crc kubenswrapper[4965]: E0219 09:54:34.333094 4965 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-h4689_openshift-operators_b7e1070f-f099-4a4f-a107-c1b8589af7c7_0(a8c08da656e370c59a2f094c5bcf964581d10bc0f8a327a18e0a90a1f42dab9e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-h4689" Feb 19 09:54:34 crc kubenswrapper[4965]: E0219 09:54:34.333159 4965 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-h4689_openshift-operators_b7e1070f-f099-4a4f-a107-c1b8589af7c7_0(a8c08da656e370c59a2f094c5bcf964581d10bc0f8a327a18e0a90a1f42dab9e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-h4689" Feb 19 09:54:34 crc kubenswrapper[4965]: E0219 09:54:34.333294 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-h4689_openshift-operators(b7e1070f-f099-4a4f-a107-c1b8589af7c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-h4689_openshift-operators(b7e1070f-f099-4a4f-a107-c1b8589af7c7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-h4689_openshift-operators_b7e1070f-f099-4a4f-a107-c1b8589af7c7_0(a8c08da656e370c59a2f094c5bcf964581d10bc0f8a327a18e0a90a1f42dab9e): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-h4689" podUID="b7e1070f-f099-4a4f-a107-c1b8589af7c7" Feb 19 09:54:34 crc kubenswrapper[4965]: E0219 09:54:34.342794 4965 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-h45db_openshift-operators_0d85e95a-22ec-4364-a43c-04e60d68be0d_0(882edb46434c41150e95d71f8048430a1073e2b478419ffda85ef97c405b03cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 09:54:34 crc kubenswrapper[4965]: E0219 09:54:34.342878 4965 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-h45db_openshift-operators_0d85e95a-22ec-4364-a43c-04e60d68be0d_0(882edb46434c41150e95d71f8048430a1073e2b478419ffda85ef97c405b03cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" Feb 19 09:54:34 crc kubenswrapper[4965]: E0219 09:54:34.342922 4965 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-h45db_openshift-operators_0d85e95a-22ec-4364-a43c-04e60d68be0d_0(882edb46434c41150e95d71f8048430a1073e2b478419ffda85ef97c405b03cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" Feb 19 09:54:34 crc kubenswrapper[4965]: E0219 09:54:34.342974 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5675bf8465-h45db_openshift-operators(0d85e95a-22ec-4364-a43c-04e60d68be0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5675bf8465-h45db_openshift-operators(0d85e95a-22ec-4364-a43c-04e60d68be0d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-h45db_openshift-operators_0d85e95a-22ec-4364-a43c-04e60d68be0d_0(882edb46434c41150e95d71f8048430a1073e2b478419ffda85ef97c405b03cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" podUID="0d85e95a-22ec-4364-a43c-04e60d68be0d" Feb 19 09:54:34 crc kubenswrapper[4965]: E0219 09:54:34.361584 4965 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-d42dv_openshift-operators_0e50e1bd-3144-4362-9c46-355cfb2ba24f_0(41c7b71b582c254f5606db304c103b8d70a723f6a003fa5efc0c02548bb9d444): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 09:54:34 crc kubenswrapper[4965]: E0219 09:54:34.361661 4965 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-d42dv_openshift-operators_0e50e1bd-3144-4362-9c46-355cfb2ba24f_0(41c7b71b582c254f5606db304c103b8d70a723f6a003fa5efc0c02548bb9d444): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" Feb 19 09:54:34 crc kubenswrapper[4965]: E0219 09:54:34.361682 4965 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-d42dv_openshift-operators_0e50e1bd-3144-4362-9c46-355cfb2ba24f_0(41c7b71b582c254f5606db304c103b8d70a723f6a003fa5efc0c02548bb9d444): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" Feb 19 09:54:34 crc kubenswrapper[4965]: E0219 09:54:34.361731 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5675bf8465-d42dv_openshift-operators(0e50e1bd-3144-4362-9c46-355cfb2ba24f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5675bf8465-d42dv_openshift-operators(0e50e1bd-3144-4362-9c46-355cfb2ba24f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-d42dv_openshift-operators_0e50e1bd-3144-4362-9c46-355cfb2ba24f_0(41c7b71b582c254f5606db304c103b8d70a723f6a003fa5efc0c02548bb9d444): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" podUID="0e50e1bd-3144-4362-9c46-355cfb2ba24f" Feb 19 09:54:34 crc kubenswrapper[4965]: E0219 09:54:34.377351 4965 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qfjz7_openshift-operators_97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1_0(4dea24981de6919cc3581a0c9fed0baf871d716d3a12e7432548370a6b2d1348): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 09:54:34 crc kubenswrapper[4965]: E0219 09:54:34.377425 4965 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qfjz7_openshift-operators_97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1_0(4dea24981de6919cc3581a0c9fed0baf871d716d3a12e7432548370a6b2d1348): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7" Feb 19 09:54:34 crc kubenswrapper[4965]: E0219 09:54:34.377445 4965 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qfjz7_openshift-operators_97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1_0(4dea24981de6919cc3581a0c9fed0baf871d716d3a12e7432548370a6b2d1348): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7" Feb 19 09:54:34 crc kubenswrapper[4965]: E0219 09:54:34.377500 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-qfjz7_openshift-operators(97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-qfjz7_openshift-operators(97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qfjz7_openshift-operators_97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1_0(4dea24981de6919cc3581a0c9fed0baf871d716d3a12e7432548370a6b2d1348): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7" podUID="97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1" Feb 19 09:54:34 crc kubenswrapper[4965]: E0219 09:54:34.391852 4965 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-x7xjb_openshift-operators_d55c4261-3d41-49fd-97dd-098bb8747449_0(174fff008a5c421a2701979f245082b8b7ac38e3d68fcdeb89df4e889fbb4b86): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 09:54:34 crc kubenswrapper[4965]: E0219 09:54:34.391923 4965 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-x7xjb_openshift-operators_d55c4261-3d41-49fd-97dd-098bb8747449_0(174fff008a5c421a2701979f245082b8b7ac38e3d68fcdeb89df4e889fbb4b86): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" Feb 19 09:54:34 crc kubenswrapper[4965]: E0219 09:54:34.391943 4965 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-x7xjb_openshift-operators_d55c4261-3d41-49fd-97dd-098bb8747449_0(174fff008a5c421a2701979f245082b8b7ac38e3d68fcdeb89df4e889fbb4b86): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" Feb 19 09:54:34 crc kubenswrapper[4965]: E0219 09:54:34.391986 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-x7xjb_openshift-operators(d55c4261-3d41-49fd-97dd-098bb8747449)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-x7xjb_openshift-operators(d55c4261-3d41-49fd-97dd-098bb8747449)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-x7xjb_openshift-operators_d55c4261-3d41-49fd-97dd-098bb8747449_0(174fff008a5c421a2701979f245082b8b7ac38e3d68fcdeb89df4e889fbb4b86): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" podUID="d55c4261-3d41-49fd-97dd-098bb8747449" Feb 19 09:54:38 crc kubenswrapper[4965]: I0219 09:54:38.198001 4965 scope.go:117] "RemoveContainer" containerID="5ce78b16779886d7dcc4f414531a624941d19304ad86ccb93cd0f009d3274b40" Feb 19 09:54:38 crc kubenswrapper[4965]: E0219 09:54:38.198860 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-nsjqz_openshift-multus(5e0b10c6-02b7-49d0-9a76-e89ebbb00528)\"" pod="openshift-multus/multus-nsjqz" podUID="5e0b10c6-02b7-49d0-9a76-e89ebbb00528" Feb 19 09:54:46 crc kubenswrapper[4965]: I0219 09:54:46.601111 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:54:46 crc kubenswrapper[4965]: I0219 09:54:46.601995 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:54:47 crc kubenswrapper[4965]: I0219 09:54:47.196863 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" Feb 19 09:54:47 crc kubenswrapper[4965]: I0219 09:54:47.196902 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-h4689" Feb 19 09:54:47 crc kubenswrapper[4965]: I0219 09:54:47.197462 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" Feb 19 09:54:47 crc kubenswrapper[4965]: I0219 09:54:47.197561 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-h4689" Feb 19 09:54:47 crc kubenswrapper[4965]: E0219 09:54:47.254788 4965 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-h45db_openshift-operators_0d85e95a-22ec-4364-a43c-04e60d68be0d_0(2e19bdfedf388b9b1fe8204adba5b8e1b142192b957908d38eea562c6db39bf4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 09:54:47 crc kubenswrapper[4965]: E0219 09:54:47.255384 4965 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-h45db_openshift-operators_0d85e95a-22ec-4364-a43c-04e60d68be0d_0(2e19bdfedf388b9b1fe8204adba5b8e1b142192b957908d38eea562c6db39bf4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" Feb 19 09:54:47 crc kubenswrapper[4965]: E0219 09:54:47.255437 4965 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-h45db_openshift-operators_0d85e95a-22ec-4364-a43c-04e60d68be0d_0(2e19bdfedf388b9b1fe8204adba5b8e1b142192b957908d38eea562c6db39bf4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" Feb 19 09:54:47 crc kubenswrapper[4965]: E0219 09:54:47.255524 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5675bf8465-h45db_openshift-operators(0d85e95a-22ec-4364-a43c-04e60d68be0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5675bf8465-h45db_openshift-operators(0d85e95a-22ec-4364-a43c-04e60d68be0d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-h45db_openshift-operators_0d85e95a-22ec-4364-a43c-04e60d68be0d_0(2e19bdfedf388b9b1fe8204adba5b8e1b142192b957908d38eea562c6db39bf4): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" podUID="0d85e95a-22ec-4364-a43c-04e60d68be0d" Feb 19 09:54:47 crc kubenswrapper[4965]: E0219 09:54:47.259869 4965 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-h4689_openshift-operators_b7e1070f-f099-4a4f-a107-c1b8589af7c7_0(d995110e67ccdeec63e37738dcf73e5133c0f81854ee81438198d6efdeeed155): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 09:54:47 crc kubenswrapper[4965]: E0219 09:54:47.259968 4965 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-h4689_openshift-operators_b7e1070f-f099-4a4f-a107-c1b8589af7c7_0(d995110e67ccdeec63e37738dcf73e5133c0f81854ee81438198d6efdeeed155): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-h4689" Feb 19 09:54:47 crc kubenswrapper[4965]: E0219 09:54:47.260004 4965 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-h4689_openshift-operators_b7e1070f-f099-4a4f-a107-c1b8589af7c7_0(d995110e67ccdeec63e37738dcf73e5133c0f81854ee81438198d6efdeeed155): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-h4689" Feb 19 09:54:47 crc kubenswrapper[4965]: E0219 09:54:47.260078 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-h4689_openshift-operators(b7e1070f-f099-4a4f-a107-c1b8589af7c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-h4689_openshift-operators(b7e1070f-f099-4a4f-a107-c1b8589af7c7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-h4689_openshift-operators_b7e1070f-f099-4a4f-a107-c1b8589af7c7_0(d995110e67ccdeec63e37738dcf73e5133c0f81854ee81438198d6efdeeed155): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-h4689" podUID="b7e1070f-f099-4a4f-a107-c1b8589af7c7" Feb 19 09:54:48 crc kubenswrapper[4965]: I0219 09:54:48.197350 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" Feb 19 09:54:48 crc kubenswrapper[4965]: I0219 09:54:48.197873 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" Feb 19 09:54:48 crc kubenswrapper[4965]: E0219 09:54:48.223832 4965 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-d42dv_openshift-operators_0e50e1bd-3144-4362-9c46-355cfb2ba24f_0(a5619d11361d1c2f5bff5deef8a8e63902cadbcb9f258ff84daab6465bd74e31): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 19 09:54:48 crc kubenswrapper[4965]: E0219 09:54:48.223920 4965 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-d42dv_openshift-operators_0e50e1bd-3144-4362-9c46-355cfb2ba24f_0(a5619d11361d1c2f5bff5deef8a8e63902cadbcb9f258ff84daab6465bd74e31): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" Feb 19 09:54:48 crc kubenswrapper[4965]: E0219 09:54:48.223956 4965 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-d42dv_openshift-operators_0e50e1bd-3144-4362-9c46-355cfb2ba24f_0(a5619d11361d1c2f5bff5deef8a8e63902cadbcb9f258ff84daab6465bd74e31): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" Feb 19 09:54:48 crc kubenswrapper[4965]: E0219 09:54:48.224033 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5675bf8465-d42dv_openshift-operators(0e50e1bd-3144-4362-9c46-355cfb2ba24f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5675bf8465-d42dv_openshift-operators(0e50e1bd-3144-4362-9c46-355cfb2ba24f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5675bf8465-d42dv_openshift-operators_0e50e1bd-3144-4362-9c46-355cfb2ba24f_0(a5619d11361d1c2f5bff5deef8a8e63902cadbcb9f258ff84daab6465bd74e31): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" podUID="0e50e1bd-3144-4362-9c46-355cfb2ba24f" Feb 19 09:54:49 crc kubenswrapper[4965]: I0219 09:54:49.197815 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" Feb 19 09:54:49 crc kubenswrapper[4965]: I0219 09:54:49.197889 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7" Feb 19 09:54:49 crc kubenswrapper[4965]: I0219 09:54:49.198455 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" Feb 19 09:54:49 crc kubenswrapper[4965]: I0219 09:54:49.198554 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7" Feb 19 09:54:49 crc kubenswrapper[4965]: E0219 09:54:49.241643 4965 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-x7xjb_openshift-operators_d55c4261-3d41-49fd-97dd-098bb8747449_0(b09863276a59609eae77ee110d37c24539b166046c84c9dec1da5e6d22ae3ed1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 09:54:49 crc kubenswrapper[4965]: E0219 09:54:49.241721 4965 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-x7xjb_openshift-operators_d55c4261-3d41-49fd-97dd-098bb8747449_0(b09863276a59609eae77ee110d37c24539b166046c84c9dec1da5e6d22ae3ed1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" Feb 19 09:54:49 crc kubenswrapper[4965]: E0219 09:54:49.241754 4965 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-x7xjb_openshift-operators_d55c4261-3d41-49fd-97dd-098bb8747449_0(b09863276a59609eae77ee110d37c24539b166046c84c9dec1da5e6d22ae3ed1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" Feb 19 09:54:49 crc kubenswrapper[4965]: E0219 09:54:49.241826 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-x7xjb_openshift-operators(d55c4261-3d41-49fd-97dd-098bb8747449)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-x7xjb_openshift-operators(d55c4261-3d41-49fd-97dd-098bb8747449)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-x7xjb_openshift-operators_d55c4261-3d41-49fd-97dd-098bb8747449_0(b09863276a59609eae77ee110d37c24539b166046c84c9dec1da5e6d22ae3ed1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" podUID="d55c4261-3d41-49fd-97dd-098bb8747449" Feb 19 09:54:49 crc kubenswrapper[4965]: E0219 09:54:49.249715 4965 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qfjz7_openshift-operators_97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1_0(3ac83e36f67359d312325ae8a70a3c8e61653aeefce1e2350f5584b730a8304c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 19 09:54:49 crc kubenswrapper[4965]: E0219 09:54:49.249766 4965 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qfjz7_openshift-operators_97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1_0(3ac83e36f67359d312325ae8a70a3c8e61653aeefce1e2350f5584b730a8304c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7" Feb 19 09:54:49 crc kubenswrapper[4965]: E0219 09:54:49.249795 4965 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qfjz7_openshift-operators_97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1_0(3ac83e36f67359d312325ae8a70a3c8e61653aeefce1e2350f5584b730a8304c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7" Feb 19 09:54:49 crc kubenswrapper[4965]: E0219 09:54:49.249840 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-qfjz7_openshift-operators(97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-qfjz7_openshift-operators(97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-qfjz7_openshift-operators_97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1_0(3ac83e36f67359d312325ae8a70a3c8e61653aeefce1e2350f5584b730a8304c): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7" podUID="97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1" Feb 19 09:54:52 crc kubenswrapper[4965]: I0219 09:54:52.198386 4965 scope.go:117] "RemoveContainer" containerID="5ce78b16779886d7dcc4f414531a624941d19304ad86ccb93cd0f009d3274b40" Feb 19 09:54:52 crc kubenswrapper[4965]: I0219 09:54:52.867100 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-nsjqz_5e0b10c6-02b7-49d0-9a76-e89ebbb00528/kube-multus/2.log" Feb 19 09:54:52 crc kubenswrapper[4965]: I0219 09:54:52.867568 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-nsjqz" event={"ID":"5e0b10c6-02b7-49d0-9a76-e89ebbb00528","Type":"ContainerStarted","Data":"18e82fe8f42ce50d2cf47ec805ea6b82b46a819b225ef9ad65e24a1f6893868c"} Feb 19 09:54:57 crc kubenswrapper[4965]: I0219 09:54:57.205400 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c2rms" Feb 19 09:55:00 crc kubenswrapper[4965]: I0219 09:55:00.197961 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-h4689" Feb 19 09:55:00 crc kubenswrapper[4965]: I0219 09:55:00.198517 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-h4689" Feb 19 09:55:00 crc kubenswrapper[4965]: I0219 09:55:00.630382 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-h4689"] Feb 19 09:55:00 crc kubenswrapper[4965]: I0219 09:55:00.917915 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-h4689" event={"ID":"b7e1070f-f099-4a4f-a107-c1b8589af7c7","Type":"ContainerStarted","Data":"00b9985542cc6bc7f3fae63e56b928c73e9592fbe3d6aa21589bfaa72af45a60"} Feb 19 09:55:01 crc kubenswrapper[4965]: I0219 09:55:01.197804 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" Feb 19 09:55:01 crc kubenswrapper[4965]: I0219 09:55:01.197898 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" Feb 19 09:55:01 crc kubenswrapper[4965]: I0219 09:55:01.198507 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" Feb 19 09:55:01 crc kubenswrapper[4965]: I0219 09:55:01.198542 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" Feb 19 09:55:01 crc kubenswrapper[4965]: I0219 09:55:01.674720 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db"] Feb 19 09:55:01 crc kubenswrapper[4965]: W0219 09:55:01.677928 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d85e95a_22ec_4364_a43c_04e60d68be0d.slice/crio-fb848414ab433c10009e1bb3674bbcfc01d4ec24e4352855fa42af4be5c7a995 WatchSource:0}: Error finding container fb848414ab433c10009e1bb3674bbcfc01d4ec24e4352855fa42af4be5c7a995: Status 404 returned error can't find the container with id fb848414ab433c10009e1bb3674bbcfc01d4ec24e4352855fa42af4be5c7a995 Feb 19 09:55:01 crc kubenswrapper[4965]: I0219 09:55:01.678356 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-x7xjb"] Feb 19 09:55:01 crc kubenswrapper[4965]: W0219 09:55:01.688485 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd55c4261_3d41_49fd_97dd_098bb8747449.slice/crio-83ddb7c3f48719156eeba420831fc92a33a4c4ab4b3bf179646fd7c236f59509 WatchSource:0}: Error finding container 83ddb7c3f48719156eeba420831fc92a33a4c4ab4b3bf179646fd7c236f59509: Status 404 returned error can't find the container with id 83ddb7c3f48719156eeba420831fc92a33a4c4ab4b3bf179646fd7c236f59509 Feb 19 09:55:01 crc kubenswrapper[4965]: I0219 09:55:01.932056 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" event={"ID":"d55c4261-3d41-49fd-97dd-098bb8747449","Type":"ContainerStarted","Data":"83ddb7c3f48719156eeba420831fc92a33a4c4ab4b3bf179646fd7c236f59509"} Feb 19 09:55:01 crc kubenswrapper[4965]: I0219 09:55:01.934487 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" event={"ID":"0d85e95a-22ec-4364-a43c-04e60d68be0d","Type":"ContainerStarted","Data":"fb848414ab433c10009e1bb3674bbcfc01d4ec24e4352855fa42af4be5c7a995"} Feb 19 09:55:03 crc kubenswrapper[4965]: I0219 09:55:03.205669 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" Feb 19 09:55:03 crc kubenswrapper[4965]: I0219 09:55:03.206348 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" Feb 19 09:55:04 crc kubenswrapper[4965]: I0219 09:55:04.197351 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7" Feb 19 09:55:04 crc kubenswrapper[4965]: I0219 09:55:04.197876 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7" Feb 19 09:55:07 crc kubenswrapper[4965]: I0219 09:55:07.501223 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv"] Feb 19 09:55:07 crc kubenswrapper[4965]: I0219 09:55:07.575953 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7"] Feb 19 09:55:07 crc kubenswrapper[4965]: W0219 09:55:07.585131 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97e4a3bf_25d9_4a7b_ab73_7be5267dcfb1.slice/crio-537139eaf4d1125e2bef8a2b968ed6cdc9df56ccbd3400af46b4518386b1cb1b WatchSource:0}: Error finding container 537139eaf4d1125e2bef8a2b968ed6cdc9df56ccbd3400af46b4518386b1cb1b: Status 404 returned error can't find the container with id 537139eaf4d1125e2bef8a2b968ed6cdc9df56ccbd3400af46b4518386b1cb1b Feb 19 09:55:07 crc kubenswrapper[4965]: I0219 09:55:07.970482 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" event={"ID":"0e50e1bd-3144-4362-9c46-355cfb2ba24f","Type":"ContainerStarted","Data":"398e73a9e2e00d7f249b93d258dab96d7e26375b94c893daea6d52fde14831db"} Feb 19 09:55:07 crc kubenswrapper[4965]: I0219 09:55:07.971678 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" event={"ID":"0e50e1bd-3144-4362-9c46-355cfb2ba24f","Type":"ContainerStarted","Data":"c82e765c3cd9df62cc5ff63a391ba4f1a7fca13163beae02200bc16ae8dc7474"} Feb 19 09:55:07 crc kubenswrapper[4965]: I0219 09:55:07.972990 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" 
event={"ID":"d55c4261-3d41-49fd-97dd-098bb8747449","Type":"ContainerStarted","Data":"3df929b42f6a6f1980decd8e0254d26fbc7deedf33e1a289adb284f7f87dd1b7"} Feb 19 09:55:07 crc kubenswrapper[4965]: I0219 09:55:07.973399 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" Feb 19 09:55:07 crc kubenswrapper[4965]: I0219 09:55:07.975019 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" event={"ID":"0d85e95a-22ec-4364-a43c-04e60d68be0d","Type":"ContainerStarted","Data":"e4194c851771c1a053bbc88dbf5ec5f52f12647763d7324117ad333091563c97"} Feb 19 09:55:07 crc kubenswrapper[4965]: I0219 09:55:07.976667 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7" event={"ID":"97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1","Type":"ContainerStarted","Data":"537139eaf4d1125e2bef8a2b968ed6cdc9df56ccbd3400af46b4518386b1cb1b"} Feb 19 09:55:07 crc kubenswrapper[4965]: I0219 09:55:07.978088 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-h4689" event={"ID":"b7e1070f-f099-4a4f-a107-c1b8589af7c7","Type":"ContainerStarted","Data":"34a115fd1f60f57a9b5927023d580fc17cedd92f31aac397bfa3fa05d711a6c5"} Feb 19 09:55:07 crc kubenswrapper[4965]: I0219 09:55:07.978304 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-h4689" Feb 19 09:55:08 crc kubenswrapper[4965]: I0219 09:55:08.011716 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-h4689" Feb 19 09:55:08 crc kubenswrapper[4965]: I0219 09:55:08.022380 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-h45db" 
podStartSLOduration=30.630587186 podStartE2EDuration="36.02235269s" podCreationTimestamp="2026-02-19 09:54:32 +0000 UTC" firstStartedPulling="2026-02-19 09:55:01.680645638 +0000 UTC m=+757.301966948" lastFinishedPulling="2026-02-19 09:55:07.072411142 +0000 UTC m=+762.693732452" observedRunningTime="2026-02-19 09:55:08.015130854 +0000 UTC m=+763.636452174" watchObservedRunningTime="2026-02-19 09:55:08.02235269 +0000 UTC m=+763.643674000" Feb 19 09:55:08 crc kubenswrapper[4965]: I0219 09:55:08.022862 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5675bf8465-d42dv" podStartSLOduration=36.022854482 podStartE2EDuration="36.022854482s" podCreationTimestamp="2026-02-19 09:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:55:07.9956535 +0000 UTC m=+763.616974830" watchObservedRunningTime="2026-02-19 09:55:08.022854482 +0000 UTC m=+763.644175792" Feb 19 09:55:08 crc kubenswrapper[4965]: I0219 09:55:08.038991 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-h4689" podStartSLOduration=28.524623267 podStartE2EDuration="35.038965355s" podCreationTimestamp="2026-02-19 09:54:33 +0000 UTC" firstStartedPulling="2026-02-19 09:55:00.642630513 +0000 UTC m=+756.263951823" lastFinishedPulling="2026-02-19 09:55:07.156972601 +0000 UTC m=+762.778293911" observedRunningTime="2026-02-19 09:55:08.037743215 +0000 UTC m=+763.659064525" watchObservedRunningTime="2026-02-19 09:55:08.038965355 +0000 UTC m=+763.660286665" Feb 19 09:55:08 crc kubenswrapper[4965]: I0219 09:55:08.080040 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" podStartSLOduration=29.705984782 podStartE2EDuration="35.080018964s" podCreationTimestamp="2026-02-19 09:54:33 
+0000 UTC" firstStartedPulling="2026-02-19 09:55:01.692942137 +0000 UTC m=+757.314263457" lastFinishedPulling="2026-02-19 09:55:07.066976329 +0000 UTC m=+762.688297639" observedRunningTime="2026-02-19 09:55:08.075378252 +0000 UTC m=+763.696699562" watchObservedRunningTime="2026-02-19 09:55:08.080018964 +0000 UTC m=+763.701340274" Feb 19 09:55:11 crc kubenswrapper[4965]: I0219 09:55:11.001434 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7" event={"ID":"97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1","Type":"ContainerStarted","Data":"8fc80851511ea707aff12932dc52453e67b8ff79d707b12161af5de8cdfa9150"} Feb 19 09:55:11 crc kubenswrapper[4965]: I0219 09:55:11.026103 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-qfjz7" podStartSLOduration=36.677548109 podStartE2EDuration="39.026081855s" podCreationTimestamp="2026-02-19 09:54:32 +0000 UTC" firstStartedPulling="2026-02-19 09:55:07.586458413 +0000 UTC m=+763.207779723" lastFinishedPulling="2026-02-19 09:55:09.934992149 +0000 UTC m=+765.556313469" observedRunningTime="2026-02-19 09:55:11.023342859 +0000 UTC m=+766.644664199" watchObservedRunningTime="2026-02-19 09:55:11.026081855 +0000 UTC m=+766.647403175" Feb 19 09:55:13 crc kubenswrapper[4965]: I0219 09:55:13.526094 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-x7xjb" Feb 19 09:55:13 crc kubenswrapper[4965]: I0219 09:55:13.781560 4965 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 09:55:14 crc kubenswrapper[4965]: I0219 09:55:14.785873 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-ls5h7"] Feb 19 09:55:14 crc kubenswrapper[4965]: I0219 09:55:14.787709 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-ls5h7" Feb 19 09:55:14 crc kubenswrapper[4965]: I0219 09:55:14.790730 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 19 09:55:14 crc kubenswrapper[4965]: I0219 09:55:14.791777 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 19 09:55:14 crc kubenswrapper[4965]: I0219 09:55:14.791802 4965 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-cjt9n" Feb 19 09:55:14 crc kubenswrapper[4965]: I0219 09:55:14.792525 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-vmfkz"] Feb 19 09:55:14 crc kubenswrapper[4965]: I0219 09:55:14.793398 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vmfkz" Feb 19 09:55:14 crc kubenswrapper[4965]: I0219 09:55:14.798695 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-5vx5v"] Feb 19 09:55:14 crc kubenswrapper[4965]: I0219 09:55:14.799569 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-5vx5v" Feb 19 09:55:14 crc kubenswrapper[4965]: I0219 09:55:14.799903 4965 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-7fjbq" Feb 19 09:55:14 crc kubenswrapper[4965]: I0219 09:55:14.803257 4965 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-c7fcw" Feb 19 09:55:14 crc kubenswrapper[4965]: I0219 09:55:14.823475 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-ls5h7"] Feb 19 09:55:14 crc kubenswrapper[4965]: I0219 09:55:14.893385 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vmfkz"] Feb 19 09:55:14 crc kubenswrapper[4965]: I0219 09:55:14.896944 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-5vx5v"] Feb 19 09:55:14 crc kubenswrapper[4965]: I0219 09:55:14.914039 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9zbk\" (UniqueName: \"kubernetes.io/projected/ac8283e8-11a9-4b2f-ac84-4f8f6a7821bc-kube-api-access-b9zbk\") pod \"cert-manager-webhook-687f57d79b-5vx5v\" (UID: \"ac8283e8-11a9-4b2f-ac84-4f8f6a7821bc\") " pod="cert-manager/cert-manager-webhook-687f57d79b-5vx5v" Feb 19 09:55:14 crc kubenswrapper[4965]: I0219 09:55:14.914100 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5trt\" (UniqueName: \"kubernetes.io/projected/592650ba-f791-4f32-bbbe-23c0a5d9e82b-kube-api-access-j5trt\") pod \"cert-manager-cainjector-cf98fcc89-ls5h7\" (UID: \"592650ba-f791-4f32-bbbe-23c0a5d9e82b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-ls5h7" Feb 19 09:55:14 crc kubenswrapper[4965]: I0219 09:55:14.914140 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-hffc4\" (UniqueName: \"kubernetes.io/projected/41967e40-5df3-456a-aae9-86b898d18216-kube-api-access-hffc4\") pod \"cert-manager-858654f9db-vmfkz\" (UID: \"41967e40-5df3-456a-aae9-86b898d18216\") " pod="cert-manager/cert-manager-858654f9db-vmfkz" Feb 19 09:55:15 crc kubenswrapper[4965]: I0219 09:55:15.015840 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9zbk\" (UniqueName: \"kubernetes.io/projected/ac8283e8-11a9-4b2f-ac84-4f8f6a7821bc-kube-api-access-b9zbk\") pod \"cert-manager-webhook-687f57d79b-5vx5v\" (UID: \"ac8283e8-11a9-4b2f-ac84-4f8f6a7821bc\") " pod="cert-manager/cert-manager-webhook-687f57d79b-5vx5v" Feb 19 09:55:15 crc kubenswrapper[4965]: I0219 09:55:15.015904 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5trt\" (UniqueName: \"kubernetes.io/projected/592650ba-f791-4f32-bbbe-23c0a5d9e82b-kube-api-access-j5trt\") pod \"cert-manager-cainjector-cf98fcc89-ls5h7\" (UID: \"592650ba-f791-4f32-bbbe-23c0a5d9e82b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-ls5h7" Feb 19 09:55:15 crc kubenswrapper[4965]: I0219 09:55:15.015939 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hffc4\" (UniqueName: \"kubernetes.io/projected/41967e40-5df3-456a-aae9-86b898d18216-kube-api-access-hffc4\") pod \"cert-manager-858654f9db-vmfkz\" (UID: \"41967e40-5df3-456a-aae9-86b898d18216\") " pod="cert-manager/cert-manager-858654f9db-vmfkz" Feb 19 09:55:15 crc kubenswrapper[4965]: I0219 09:55:15.040994 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hffc4\" (UniqueName: \"kubernetes.io/projected/41967e40-5df3-456a-aae9-86b898d18216-kube-api-access-hffc4\") pod \"cert-manager-858654f9db-vmfkz\" (UID: \"41967e40-5df3-456a-aae9-86b898d18216\") " pod="cert-manager/cert-manager-858654f9db-vmfkz" Feb 19 09:55:15 crc kubenswrapper[4965]: I0219 
09:55:15.052296 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9zbk\" (UniqueName: \"kubernetes.io/projected/ac8283e8-11a9-4b2f-ac84-4f8f6a7821bc-kube-api-access-b9zbk\") pod \"cert-manager-webhook-687f57d79b-5vx5v\" (UID: \"ac8283e8-11a9-4b2f-ac84-4f8f6a7821bc\") " pod="cert-manager/cert-manager-webhook-687f57d79b-5vx5v" Feb 19 09:55:15 crc kubenswrapper[4965]: I0219 09:55:15.053070 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5trt\" (UniqueName: \"kubernetes.io/projected/592650ba-f791-4f32-bbbe-23c0a5d9e82b-kube-api-access-j5trt\") pod \"cert-manager-cainjector-cf98fcc89-ls5h7\" (UID: \"592650ba-f791-4f32-bbbe-23c0a5d9e82b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-ls5h7" Feb 19 09:55:15 crc kubenswrapper[4965]: I0219 09:55:15.125796 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-ls5h7" Feb 19 09:55:15 crc kubenswrapper[4965]: I0219 09:55:15.153501 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vmfkz" Feb 19 09:55:15 crc kubenswrapper[4965]: I0219 09:55:15.184124 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-5vx5v" Feb 19 09:55:15 crc kubenswrapper[4965]: I0219 09:55:15.540611 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vmfkz"] Feb 19 09:55:15 crc kubenswrapper[4965]: I0219 09:55:15.603456 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-5vx5v"] Feb 19 09:55:15 crc kubenswrapper[4965]: W0219 09:55:15.613596 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac8283e8_11a9_4b2f_ac84_4f8f6a7821bc.slice/crio-34554cac380bfb91102cfffa2433e1cfe9d229a4499f06ad8c50613b5d94293a WatchSource:0}: Error finding container 34554cac380bfb91102cfffa2433e1cfe9d229a4499f06ad8c50613b5d94293a: Status 404 returned error can't find the container with id 34554cac380bfb91102cfffa2433e1cfe9d229a4499f06ad8c50613b5d94293a Feb 19 09:55:15 crc kubenswrapper[4965]: I0219 09:55:15.632986 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-ls5h7"] Feb 19 09:55:16 crc kubenswrapper[4965]: I0219 09:55:16.035584 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-vmfkz" event={"ID":"41967e40-5df3-456a-aae9-86b898d18216","Type":"ContainerStarted","Data":"4038214dbdd4e09e54a66a679212be2a0b44bab977cdfc5931c4d7c11f6e7d55"} Feb 19 09:55:16 crc kubenswrapper[4965]: I0219 09:55:16.036859 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-5vx5v" event={"ID":"ac8283e8-11a9-4b2f-ac84-4f8f6a7821bc","Type":"ContainerStarted","Data":"34554cac380bfb91102cfffa2433e1cfe9d229a4499f06ad8c50613b5d94293a"} Feb 19 09:55:16 crc kubenswrapper[4965]: I0219 09:55:16.038064 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-ls5h7" 
event={"ID":"592650ba-f791-4f32-bbbe-23c0a5d9e82b","Type":"ContainerStarted","Data":"6916a0e93c27101f447c873de7e691dc33da7fa6655a43597610f0d1e6b38908"} Feb 19 09:55:16 crc kubenswrapper[4965]: I0219 09:55:16.601540 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 09:55:16 crc kubenswrapper[4965]: I0219 09:55:16.601607 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 09:55:16 crc kubenswrapper[4965]: I0219 09:55:16.601661 4965 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" Feb 19 09:55:16 crc kubenswrapper[4965]: I0219 09:55:16.602231 4965 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b59d24bc3fa01905164aa2b246a7f2c9309e5d002a2ffc3bd7f13562cf306e5b"} pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 09:55:16 crc kubenswrapper[4965]: I0219 09:55:16.602286 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" containerID="cri-o://b59d24bc3fa01905164aa2b246a7f2c9309e5d002a2ffc3bd7f13562cf306e5b" gracePeriod=600 Feb 19 09:55:17 crc kubenswrapper[4965]: I0219 09:55:17.075971 
4965 generic.go:334] "Generic (PLEG): container finished" podID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerID="b59d24bc3fa01905164aa2b246a7f2c9309e5d002a2ffc3bd7f13562cf306e5b" exitCode=0 Feb 19 09:55:17 crc kubenswrapper[4965]: I0219 09:55:17.076079 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerDied","Data":"b59d24bc3fa01905164aa2b246a7f2c9309e5d002a2ffc3bd7f13562cf306e5b"} Feb 19 09:55:17 crc kubenswrapper[4965]: I0219 09:55:17.076490 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerStarted","Data":"2381a024086baeb4b1c2a62ae636f4e796e3ec1a1ca046d7c801db6f42b09ff3"} Feb 19 09:55:17 crc kubenswrapper[4965]: I0219 09:55:17.076514 4965 scope.go:117] "RemoveContainer" containerID="aeea13d7baceae3d38efbdd04018cfdc27f75d2c326225193a932aad7bc7bcd2" Feb 19 09:55:21 crc kubenswrapper[4965]: I0219 09:55:21.130994 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-5vx5v" event={"ID":"ac8283e8-11a9-4b2f-ac84-4f8f6a7821bc","Type":"ContainerStarted","Data":"63aaaee6ba64e6f1719ac93fd1343e57f44b708e00860c822de9e931da65c11e"} Feb 19 09:55:21 crc kubenswrapper[4965]: I0219 09:55:21.131935 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-5vx5v" Feb 19 09:55:21 crc kubenswrapper[4965]: I0219 09:55:21.132673 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-ls5h7" event={"ID":"592650ba-f791-4f32-bbbe-23c0a5d9e82b","Type":"ContainerStarted","Data":"98b5f2bf59efb7a4e80ff4949a9cb56952361609eead06a749109f3a4beab18e"} Feb 19 09:55:21 crc kubenswrapper[4965]: I0219 09:55:21.134281 4965 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="cert-manager/cert-manager-858654f9db-vmfkz" event={"ID":"41967e40-5df3-456a-aae9-86b898d18216","Type":"ContainerStarted","Data":"2a8783136874ad81377379c7d5871cd96bf539c0a028a3dc6742103bd28d1259"} Feb 19 09:55:21 crc kubenswrapper[4965]: I0219 09:55:21.160526 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-5vx5v" podStartSLOduration=1.9323863540000001 podStartE2EDuration="7.160496743s" podCreationTimestamp="2026-02-19 09:55:14 +0000 UTC" firstStartedPulling="2026-02-19 09:55:15.61730772 +0000 UTC m=+771.238629020" lastFinishedPulling="2026-02-19 09:55:20.845418069 +0000 UTC m=+776.466739409" observedRunningTime="2026-02-19 09:55:21.154868326 +0000 UTC m=+776.776189666" watchObservedRunningTime="2026-02-19 09:55:21.160496743 +0000 UTC m=+776.781818053" Feb 19 09:55:21 crc kubenswrapper[4965]: I0219 09:55:21.182932 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-ls5h7" podStartSLOduration=2.000273249 podStartE2EDuration="7.182902679s" podCreationTimestamp="2026-02-19 09:55:14 +0000 UTC" firstStartedPulling="2026-02-19 09:55:15.649680779 +0000 UTC m=+771.271002089" lastFinishedPulling="2026-02-19 09:55:20.832310189 +0000 UTC m=+776.453631519" observedRunningTime="2026-02-19 09:55:21.17923863 +0000 UTC m=+776.800559950" watchObservedRunningTime="2026-02-19 09:55:21.182902679 +0000 UTC m=+776.804224019" Feb 19 09:55:30 crc kubenswrapper[4965]: I0219 09:55:30.188066 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-5vx5v" Feb 19 09:55:30 crc kubenswrapper[4965]: I0219 09:55:30.206346 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-vmfkz" podStartSLOduration=10.924386316 podStartE2EDuration="16.206317326s" podCreationTimestamp="2026-02-19 09:55:14 +0000 UTC" 
firstStartedPulling="2026-02-19 09:55:15.552596833 +0000 UTC m=+771.173918143" lastFinishedPulling="2026-02-19 09:55:20.834527843 +0000 UTC m=+776.455849153" observedRunningTime="2026-02-19 09:55:21.214174721 +0000 UTC m=+776.835496061" watchObservedRunningTime="2026-02-19 09:55:30.206317326 +0000 UTC m=+785.827638646" Feb 19 09:55:50 crc kubenswrapper[4965]: I0219 09:55:50.553806 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6224k"] Feb 19 09:55:50 crc kubenswrapper[4965]: I0219 09:55:50.556550 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6224k" Feb 19 09:55:50 crc kubenswrapper[4965]: I0219 09:55:50.619076 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6224k"] Feb 19 09:55:50 crc kubenswrapper[4965]: I0219 09:55:50.741281 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggsc5\" (UniqueName: \"kubernetes.io/projected/4de6e909-65bb-4aa0-a792-d25430c74676-kube-api-access-ggsc5\") pod \"community-operators-6224k\" (UID: \"4de6e909-65bb-4aa0-a792-d25430c74676\") " pod="openshift-marketplace/community-operators-6224k" Feb 19 09:55:50 crc kubenswrapper[4965]: I0219 09:55:50.741352 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4de6e909-65bb-4aa0-a792-d25430c74676-utilities\") pod \"community-operators-6224k\" (UID: \"4de6e909-65bb-4aa0-a792-d25430c74676\") " pod="openshift-marketplace/community-operators-6224k" Feb 19 09:55:50 crc kubenswrapper[4965]: I0219 09:55:50.741404 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4de6e909-65bb-4aa0-a792-d25430c74676-catalog-content\") pod 
\"community-operators-6224k\" (UID: \"4de6e909-65bb-4aa0-a792-d25430c74676\") " pod="openshift-marketplace/community-operators-6224k" Feb 19 09:55:50 crc kubenswrapper[4965]: I0219 09:55:50.842329 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggsc5\" (UniqueName: \"kubernetes.io/projected/4de6e909-65bb-4aa0-a792-d25430c74676-kube-api-access-ggsc5\") pod \"community-operators-6224k\" (UID: \"4de6e909-65bb-4aa0-a792-d25430c74676\") " pod="openshift-marketplace/community-operators-6224k" Feb 19 09:55:50 crc kubenswrapper[4965]: I0219 09:55:50.842387 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4de6e909-65bb-4aa0-a792-d25430c74676-utilities\") pod \"community-operators-6224k\" (UID: \"4de6e909-65bb-4aa0-a792-d25430c74676\") " pod="openshift-marketplace/community-operators-6224k" Feb 19 09:55:50 crc kubenswrapper[4965]: I0219 09:55:50.842440 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4de6e909-65bb-4aa0-a792-d25430c74676-catalog-content\") pod \"community-operators-6224k\" (UID: \"4de6e909-65bb-4aa0-a792-d25430c74676\") " pod="openshift-marketplace/community-operators-6224k" Feb 19 09:55:50 crc kubenswrapper[4965]: I0219 09:55:50.843017 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4de6e909-65bb-4aa0-a792-d25430c74676-catalog-content\") pod \"community-operators-6224k\" (UID: \"4de6e909-65bb-4aa0-a792-d25430c74676\") " pod="openshift-marketplace/community-operators-6224k" Feb 19 09:55:50 crc kubenswrapper[4965]: I0219 09:55:50.843761 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4de6e909-65bb-4aa0-a792-d25430c74676-utilities\") pod \"community-operators-6224k\" (UID: 
\"4de6e909-65bb-4aa0-a792-d25430c74676\") " pod="openshift-marketplace/community-operators-6224k" Feb 19 09:55:50 crc kubenswrapper[4965]: I0219 09:55:50.880861 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggsc5\" (UniqueName: \"kubernetes.io/projected/4de6e909-65bb-4aa0-a792-d25430c74676-kube-api-access-ggsc5\") pod \"community-operators-6224k\" (UID: \"4de6e909-65bb-4aa0-a792-d25430c74676\") " pod="openshift-marketplace/community-operators-6224k" Feb 19 09:55:51 crc kubenswrapper[4965]: I0219 09:55:51.175665 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6224k" Feb 19 09:55:51 crc kubenswrapper[4965]: I0219 09:55:51.662931 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6224k"] Feb 19 09:55:51 crc kubenswrapper[4965]: W0219 09:55:51.688393 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4de6e909_65bb_4aa0_a792_d25430c74676.slice/crio-9d96f2656364375e866c3372ba31cbbac3d0823db10101ee1dae87a19dcf5c42 WatchSource:0}: Error finding container 9d96f2656364375e866c3372ba31cbbac3d0823db10101ee1dae87a19dcf5c42: Status 404 returned error can't find the container with id 9d96f2656364375e866c3372ba31cbbac3d0823db10101ee1dae87a19dcf5c42 Feb 19 09:55:52 crc kubenswrapper[4965]: I0219 09:55:52.349851 4965 generic.go:334] "Generic (PLEG): container finished" podID="4de6e909-65bb-4aa0-a792-d25430c74676" containerID="a969bc7c570548a1ea12b99b5f7b35e94cb900bb0e6fb82f60a70572bba5384d" exitCode=0 Feb 19 09:55:52 crc kubenswrapper[4965]: I0219 09:55:52.349912 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6224k" event={"ID":"4de6e909-65bb-4aa0-a792-d25430c74676","Type":"ContainerDied","Data":"a969bc7c570548a1ea12b99b5f7b35e94cb900bb0e6fb82f60a70572bba5384d"} Feb 19 09:55:52 
crc kubenswrapper[4965]: I0219 09:55:52.349946 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6224k" event={"ID":"4de6e909-65bb-4aa0-a792-d25430c74676","Type":"ContainerStarted","Data":"9d96f2656364375e866c3372ba31cbbac3d0823db10101ee1dae87a19dcf5c42"} Feb 19 09:55:53 crc kubenswrapper[4965]: I0219 09:55:53.360518 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6224k" event={"ID":"4de6e909-65bb-4aa0-a792-d25430c74676","Type":"ContainerStarted","Data":"639c5ec82f837b0e6401f2d5e41ab080293802f1cdc43c21cea1f8c54d7c238a"} Feb 19 09:55:54 crc kubenswrapper[4965]: I0219 09:55:54.372064 4965 generic.go:334] "Generic (PLEG): container finished" podID="4de6e909-65bb-4aa0-a792-d25430c74676" containerID="639c5ec82f837b0e6401f2d5e41ab080293802f1cdc43c21cea1f8c54d7c238a" exitCode=0 Feb 19 09:55:54 crc kubenswrapper[4965]: I0219 09:55:54.372117 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6224k" event={"ID":"4de6e909-65bb-4aa0-a792-d25430c74676","Type":"ContainerDied","Data":"639c5ec82f837b0e6401f2d5e41ab080293802f1cdc43c21cea1f8c54d7c238a"} Feb 19 09:55:55 crc kubenswrapper[4965]: I0219 09:55:55.388266 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6224k" event={"ID":"4de6e909-65bb-4aa0-a792-d25430c74676","Type":"ContainerStarted","Data":"3c59b84cf3d131a4a59d1e3f5438e49df4381ef39b77a11fb8ef7acfde41bbd8"} Feb 19 09:55:55 crc kubenswrapper[4965]: I0219 09:55:55.418224 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6224k" podStartSLOduration=2.96596357 podStartE2EDuration="5.418183892s" podCreationTimestamp="2026-02-19 09:55:50 +0000 UTC" firstStartedPulling="2026-02-19 09:55:52.352038428 +0000 UTC m=+807.973359758" lastFinishedPulling="2026-02-19 09:55:54.80425877 +0000 UTC m=+810.425580080" 
observedRunningTime="2026-02-19 09:55:55.415879466 +0000 UTC m=+811.037200876" watchObservedRunningTime="2026-02-19 09:55:55.418183892 +0000 UTC m=+811.039505202"
Feb 19 09:56:00 crc kubenswrapper[4965]: I0219 09:56:00.797146 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v"]
Feb 19 09:56:00 crc kubenswrapper[4965]: I0219 09:56:00.801310 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v"
Feb 19 09:56:00 crc kubenswrapper[4965]: I0219 09:56:00.803613 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 19 09:56:00 crc kubenswrapper[4965]: I0219 09:56:00.817944 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v"]
Feb 19 09:56:00 crc kubenswrapper[4965]: I0219 09:56:00.898898 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/476478a2-24c2-4386-9876-ab59f36cabbf-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v\" (UID: \"476478a2-24c2-4386-9876-ab59f36cabbf\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v"
Feb 19 09:56:00 crc kubenswrapper[4965]: I0219 09:56:00.898992 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6dft\" (UniqueName: \"kubernetes.io/projected/476478a2-24c2-4386-9876-ab59f36cabbf-kube-api-access-t6dft\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v\" (UID: \"476478a2-24c2-4386-9876-ab59f36cabbf\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v"
Feb 19 09:56:00 crc kubenswrapper[4965]: I0219 09:56:00.899038 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/476478a2-24c2-4386-9876-ab59f36cabbf-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v\" (UID: \"476478a2-24c2-4386-9876-ab59f36cabbf\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v"
Feb 19 09:56:01 crc kubenswrapper[4965]: I0219 09:56:01.000585 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/476478a2-24c2-4386-9876-ab59f36cabbf-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v\" (UID: \"476478a2-24c2-4386-9876-ab59f36cabbf\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v"
Feb 19 09:56:01 crc kubenswrapper[4965]: I0219 09:56:01.000910 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6dft\" (UniqueName: \"kubernetes.io/projected/476478a2-24c2-4386-9876-ab59f36cabbf-kube-api-access-t6dft\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v\" (UID: \"476478a2-24c2-4386-9876-ab59f36cabbf\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v"
Feb 19 09:56:01 crc kubenswrapper[4965]: I0219 09:56:01.000987 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/476478a2-24c2-4386-9876-ab59f36cabbf-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v\" (UID: \"476478a2-24c2-4386-9876-ab59f36cabbf\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v"
Feb 19 09:56:01 crc kubenswrapper[4965]: I0219 09:56:01.002060 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/476478a2-24c2-4386-9876-ab59f36cabbf-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v\" (UID: \"476478a2-24c2-4386-9876-ab59f36cabbf\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v"
Feb 19 09:56:01 crc kubenswrapper[4965]: I0219 09:56:01.002060 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/476478a2-24c2-4386-9876-ab59f36cabbf-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v\" (UID: \"476478a2-24c2-4386-9876-ab59f36cabbf\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v"
Feb 19 09:56:01 crc kubenswrapper[4965]: I0219 09:56:01.024796 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6dft\" (UniqueName: \"kubernetes.io/projected/476478a2-24c2-4386-9876-ab59f36cabbf-kube-api-access-t6dft\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v\" (UID: \"476478a2-24c2-4386-9876-ab59f36cabbf\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v"
Feb 19 09:56:01 crc kubenswrapper[4965]: I0219 09:56:01.130785 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v"
Feb 19 09:56:01 crc kubenswrapper[4965]: I0219 09:56:01.176113 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6224k"
Feb 19 09:56:01 crc kubenswrapper[4965]: I0219 09:56:01.176243 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6224k"
Feb 19 09:56:01 crc kubenswrapper[4965]: I0219 09:56:01.238642 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6224k"
Feb 19 09:56:01 crc kubenswrapper[4965]: I0219 09:56:01.368672 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v"]
Feb 19 09:56:01 crc kubenswrapper[4965]: I0219 09:56:01.481409 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v" event={"ID":"476478a2-24c2-4386-9876-ab59f36cabbf","Type":"ContainerStarted","Data":"f3958449d36e96a3936fd27bc8f42aadf63bac911a6a5858e933ed6c90dba9a5"}
Feb 19 09:56:01 crc kubenswrapper[4965]: I0219 09:56:01.530725 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6224k"
Feb 19 09:56:02 crc kubenswrapper[4965]: I0219 09:56:02.487480 4965 generic.go:334] "Generic (PLEG): container finished" podID="476478a2-24c2-4386-9876-ab59f36cabbf" containerID="ffd9aadc4d1d5b03a0d8f62524b8d8b937ba8c1385f74657c4cbe771620e205e" exitCode=0
Feb 19 09:56:02 crc kubenswrapper[4965]: I0219 09:56:02.487547 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v" event={"ID":"476478a2-24c2-4386-9876-ab59f36cabbf","Type":"ContainerDied","Data":"ffd9aadc4d1d5b03a0d8f62524b8d8b937ba8c1385f74657c4cbe771620e205e"}
Feb 19 09:56:03 crc kubenswrapper[4965]: I0219 09:56:03.954223 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p6p28"]
Feb 19 09:56:03 crc kubenswrapper[4965]: I0219 09:56:03.955507 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p6p28"
Feb 19 09:56:03 crc kubenswrapper[4965]: I0219 09:56:03.971658 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p6p28"]
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.001529 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"]
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.002299 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.013121 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt"
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.013291 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt"
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.028227 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.044551 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm9v6\" (UniqueName: \"kubernetes.io/projected/6b0d8fd6-b505-4482-a453-64584799d747-kube-api-access-lm9v6\") pod \"redhat-operators-p6p28\" (UID: \"6b0d8fd6-b505-4482-a453-64584799d747\") " pod="openshift-marketplace/redhat-operators-p6p28"
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.044631 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b0d8fd6-b505-4482-a453-64584799d747-utilities\") pod \"redhat-operators-p6p28\" (UID: \"6b0d8fd6-b505-4482-a453-64584799d747\") " pod="openshift-marketplace/redhat-operators-p6p28"
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.044774 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b0d8fd6-b505-4482-a453-64584799d747-catalog-content\") pod \"redhat-operators-p6p28\" (UID: \"6b0d8fd6-b505-4482-a453-64584799d747\") " pod="openshift-marketplace/redhat-operators-p6p28"
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.145729 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b0d8fd6-b505-4482-a453-64584799d747-utilities\") pod \"redhat-operators-p6p28\" (UID: \"6b0d8fd6-b505-4482-a453-64584799d747\") " pod="openshift-marketplace/redhat-operators-p6p28"
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.145796 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b0d8fd6-b505-4482-a453-64584799d747-catalog-content\") pod \"redhat-operators-p6p28\" (UID: \"6b0d8fd6-b505-4482-a453-64584799d747\") " pod="openshift-marketplace/redhat-operators-p6p28"
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.145847 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm9v6\" (UniqueName: \"kubernetes.io/projected/6b0d8fd6-b505-4482-a453-64584799d747-kube-api-access-lm9v6\") pod \"redhat-operators-p6p28\" (UID: \"6b0d8fd6-b505-4482-a453-64584799d747\") " pod="openshift-marketplace/redhat-operators-p6p28"
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.145881 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-858f69dd-d41f-4ed2-87ac-eaa52336a900\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-858f69dd-d41f-4ed2-87ac-eaa52336a900\") pod \"minio\" (UID: \"c17b5348-5eb4-4a51-b913-43f613170fb8\") " pod="minio-dev/minio"
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.145906 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw7dj\" (UniqueName: \"kubernetes.io/projected/c17b5348-5eb4-4a51-b913-43f613170fb8-kube-api-access-pw7dj\") pod \"minio\" (UID: \"c17b5348-5eb4-4a51-b913-43f613170fb8\") " pod="minio-dev/minio"
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.146469 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b0d8fd6-b505-4482-a453-64584799d747-utilities\") pod \"redhat-operators-p6p28\" (UID: \"6b0d8fd6-b505-4482-a453-64584799d747\") " pod="openshift-marketplace/redhat-operators-p6p28"
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.146469 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b0d8fd6-b505-4482-a453-64584799d747-catalog-content\") pod \"redhat-operators-p6p28\" (UID: \"6b0d8fd6-b505-4482-a453-64584799d747\") " pod="openshift-marketplace/redhat-operators-p6p28"
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.171796 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm9v6\" (UniqueName: \"kubernetes.io/projected/6b0d8fd6-b505-4482-a453-64584799d747-kube-api-access-lm9v6\") pod \"redhat-operators-p6p28\" (UID: \"6b0d8fd6-b505-4482-a453-64584799d747\") " pod="openshift-marketplace/redhat-operators-p6p28"
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.247562 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-858f69dd-d41f-4ed2-87ac-eaa52336a900\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-858f69dd-d41f-4ed2-87ac-eaa52336a900\") pod \"minio\" (UID: \"c17b5348-5eb4-4a51-b913-43f613170fb8\") " pod="minio-dev/minio"
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.247627 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw7dj\" (UniqueName: \"kubernetes.io/projected/c17b5348-5eb4-4a51-b913-43f613170fb8-kube-api-access-pw7dj\") pod \"minio\" (UID: \"c17b5348-5eb4-4a51-b913-43f613170fb8\") " pod="minio-dev/minio"
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.252347 4965 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.252416 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-858f69dd-d41f-4ed2-87ac-eaa52336a900\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-858f69dd-d41f-4ed2-87ac-eaa52336a900\") pod \"minio\" (UID: \"c17b5348-5eb4-4a51-b913-43f613170fb8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0333ee9f806a967701d4633542eea4b20c796c59347fe2b79c1e5aa8d6529c29/globalmount\"" pod="minio-dev/minio"
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.266937 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw7dj\" (UniqueName: \"kubernetes.io/projected/c17b5348-5eb4-4a51-b913-43f613170fb8-kube-api-access-pw7dj\") pod \"minio\" (UID: \"c17b5348-5eb4-4a51-b913-43f613170fb8\") " pod="minio-dev/minio"
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.274361 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p6p28"
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.311882 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-858f69dd-d41f-4ed2-87ac-eaa52336a900\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-858f69dd-d41f-4ed2-87ac-eaa52336a900\") pod \"minio\" (UID: \"c17b5348-5eb4-4a51-b913-43f613170fb8\") " pod="minio-dev/minio"
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.320461 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.504858 4965 generic.go:334] "Generic (PLEG): container finished" podID="476478a2-24c2-4386-9876-ab59f36cabbf" containerID="4fbb4db64d17adbe52a06d9cb97f01d85ef46d1c30e74a4f489db8cae5908de4" exitCode=0
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.505067 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v" event={"ID":"476478a2-24c2-4386-9876-ab59f36cabbf","Type":"ContainerDied","Data":"4fbb4db64d17adbe52a06d9cb97f01d85ef46d1c30e74a4f489db8cae5908de4"}
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.514797 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p6p28"]
Feb 19 09:56:04 crc kubenswrapper[4965]: W0219 09:56:04.527982 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b0d8fd6_b505_4482_a453_64584799d747.slice/crio-42178de044193b0348066b0e82280a93c2117c155990ac1cab50a0b067422278 WatchSource:0}: Error finding container 42178de044193b0348066b0e82280a93c2117c155990ac1cab50a0b067422278: Status 404 returned error can't find the container with id 42178de044193b0348066b0e82280a93c2117c155990ac1cab50a0b067422278
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.589361 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.742654 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6224k"]
Feb 19 09:56:04 crc kubenswrapper[4965]: I0219 09:56:04.743453 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6224k" podUID="4de6e909-65bb-4aa0-a792-d25430c74676" containerName="registry-server" containerID="cri-o://3c59b84cf3d131a4a59d1e3f5438e49df4381ef39b77a11fb8ef7acfde41bbd8" gracePeriod=2
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.120592 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6224k"
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.264370 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4de6e909-65bb-4aa0-a792-d25430c74676-catalog-content\") pod \"4de6e909-65bb-4aa0-a792-d25430c74676\" (UID: \"4de6e909-65bb-4aa0-a792-d25430c74676\") "
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.264494 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4de6e909-65bb-4aa0-a792-d25430c74676-utilities\") pod \"4de6e909-65bb-4aa0-a792-d25430c74676\" (UID: \"4de6e909-65bb-4aa0-a792-d25430c74676\") "
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.264578 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggsc5\" (UniqueName: \"kubernetes.io/projected/4de6e909-65bb-4aa0-a792-d25430c74676-kube-api-access-ggsc5\") pod \"4de6e909-65bb-4aa0-a792-d25430c74676\" (UID: \"4de6e909-65bb-4aa0-a792-d25430c74676\") "
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.266348 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4de6e909-65bb-4aa0-a792-d25430c74676-utilities" (OuterVolumeSpecName: "utilities") pod "4de6e909-65bb-4aa0-a792-d25430c74676" (UID: "4de6e909-65bb-4aa0-a792-d25430c74676"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.275436 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4de6e909-65bb-4aa0-a792-d25430c74676-kube-api-access-ggsc5" (OuterVolumeSpecName: "kube-api-access-ggsc5") pod "4de6e909-65bb-4aa0-a792-d25430c74676" (UID: "4de6e909-65bb-4aa0-a792-d25430c74676"). InnerVolumeSpecName "kube-api-access-ggsc5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.323289 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4de6e909-65bb-4aa0-a792-d25430c74676-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4de6e909-65bb-4aa0-a792-d25430c74676" (UID: "4de6e909-65bb-4aa0-a792-d25430c74676"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.366150 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4de6e909-65bb-4aa0-a792-d25430c74676-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.366316 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4de6e909-65bb-4aa0-a792-d25430c74676-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.366328 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggsc5\" (UniqueName: \"kubernetes.io/projected/4de6e909-65bb-4aa0-a792-d25430c74676-kube-api-access-ggsc5\") on node \"crc\" DevicePath \"\""
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.518909 4965 generic.go:334] "Generic (PLEG): container finished" podID="476478a2-24c2-4386-9876-ab59f36cabbf" containerID="93ab4bbed54b591c700fa874bc66430ad1d89f30b1345945625cb25b309c5aee" exitCode=0
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.519283 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v" event={"ID":"476478a2-24c2-4386-9876-ab59f36cabbf","Type":"ContainerDied","Data":"93ab4bbed54b591c700fa874bc66430ad1d89f30b1345945625cb25b309c5aee"}
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.522363 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"c17b5348-5eb4-4a51-b913-43f613170fb8","Type":"ContainerStarted","Data":"0a80e5658e109dbcac68f22186dede27ae168c8934bfaaad6d282bb68ed8d9f0"}
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.525282 4965 generic.go:334] "Generic (PLEG): container finished" podID="6b0d8fd6-b505-4482-a453-64584799d747" containerID="0ef7d1891c632e0af316a6cee1167c91a23711f04862dd4a8ba412b8753866ef" exitCode=0
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.525386 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6p28" event={"ID":"6b0d8fd6-b505-4482-a453-64584799d747","Type":"ContainerDied","Data":"0ef7d1891c632e0af316a6cee1167c91a23711f04862dd4a8ba412b8753866ef"}
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.525418 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6p28" event={"ID":"6b0d8fd6-b505-4482-a453-64584799d747","Type":"ContainerStarted","Data":"42178de044193b0348066b0e82280a93c2117c155990ac1cab50a0b067422278"}
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.531305 4965 generic.go:334] "Generic (PLEG): container finished" podID="4de6e909-65bb-4aa0-a792-d25430c74676" containerID="3c59b84cf3d131a4a59d1e3f5438e49df4381ef39b77a11fb8ef7acfde41bbd8" exitCode=0
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.531371 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6224k" event={"ID":"4de6e909-65bb-4aa0-a792-d25430c74676","Type":"ContainerDied","Data":"3c59b84cf3d131a4a59d1e3f5438e49df4381ef39b77a11fb8ef7acfde41bbd8"}
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.531401 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6224k"
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.531442 4965 scope.go:117] "RemoveContainer" containerID="3c59b84cf3d131a4a59d1e3f5438e49df4381ef39b77a11fb8ef7acfde41bbd8"
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.531418 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6224k" event={"ID":"4de6e909-65bb-4aa0-a792-d25430c74676","Type":"ContainerDied","Data":"9d96f2656364375e866c3372ba31cbbac3d0823db10101ee1dae87a19dcf5c42"}
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.570339 4965 scope.go:117] "RemoveContainer" containerID="639c5ec82f837b0e6401f2d5e41ab080293802f1cdc43c21cea1f8c54d7c238a"
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.587827 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6224k"]
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.602916 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6224k"]
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.632940 4965 scope.go:117] "RemoveContainer" containerID="a969bc7c570548a1ea12b99b5f7b35e94cb900bb0e6fb82f60a70572bba5384d"
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.655598 4965 scope.go:117] "RemoveContainer" containerID="3c59b84cf3d131a4a59d1e3f5438e49df4381ef39b77a11fb8ef7acfde41bbd8"
Feb 19 09:56:05 crc kubenswrapper[4965]: E0219 09:56:05.656178 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c59b84cf3d131a4a59d1e3f5438e49df4381ef39b77a11fb8ef7acfde41bbd8\": container with ID starting with 3c59b84cf3d131a4a59d1e3f5438e49df4381ef39b77a11fb8ef7acfde41bbd8 not found: ID does not exist" containerID="3c59b84cf3d131a4a59d1e3f5438e49df4381ef39b77a11fb8ef7acfde41bbd8"
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.656253 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c59b84cf3d131a4a59d1e3f5438e49df4381ef39b77a11fb8ef7acfde41bbd8"} err="failed to get container status \"3c59b84cf3d131a4a59d1e3f5438e49df4381ef39b77a11fb8ef7acfde41bbd8\": rpc error: code = NotFound desc = could not find container \"3c59b84cf3d131a4a59d1e3f5438e49df4381ef39b77a11fb8ef7acfde41bbd8\": container with ID starting with 3c59b84cf3d131a4a59d1e3f5438e49df4381ef39b77a11fb8ef7acfde41bbd8 not found: ID does not exist"
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.656283 4965 scope.go:117] "RemoveContainer" containerID="639c5ec82f837b0e6401f2d5e41ab080293802f1cdc43c21cea1f8c54d7c238a"
Feb 19 09:56:05 crc kubenswrapper[4965]: E0219 09:56:05.656626 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"639c5ec82f837b0e6401f2d5e41ab080293802f1cdc43c21cea1f8c54d7c238a\": container with ID starting with 639c5ec82f837b0e6401f2d5e41ab080293802f1cdc43c21cea1f8c54d7c238a not found: ID does not exist" containerID="639c5ec82f837b0e6401f2d5e41ab080293802f1cdc43c21cea1f8c54d7c238a"
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.656697 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"639c5ec82f837b0e6401f2d5e41ab080293802f1cdc43c21cea1f8c54d7c238a"} err="failed to get container status \"639c5ec82f837b0e6401f2d5e41ab080293802f1cdc43c21cea1f8c54d7c238a\": rpc error: code = NotFound desc = could not find container \"639c5ec82f837b0e6401f2d5e41ab080293802f1cdc43c21cea1f8c54d7c238a\": container with ID starting with 639c5ec82f837b0e6401f2d5e41ab080293802f1cdc43c21cea1f8c54d7c238a not found: ID does not exist"
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.656737 4965 scope.go:117] "RemoveContainer" containerID="a969bc7c570548a1ea12b99b5f7b35e94cb900bb0e6fb82f60a70572bba5384d"
Feb 19 09:56:05 crc kubenswrapper[4965]: E0219 09:56:05.657128 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a969bc7c570548a1ea12b99b5f7b35e94cb900bb0e6fb82f60a70572bba5384d\": container with ID starting with a969bc7c570548a1ea12b99b5f7b35e94cb900bb0e6fb82f60a70572bba5384d not found: ID does not exist" containerID="a969bc7c570548a1ea12b99b5f7b35e94cb900bb0e6fb82f60a70572bba5384d"
Feb 19 09:56:05 crc kubenswrapper[4965]: I0219 09:56:05.657163 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a969bc7c570548a1ea12b99b5f7b35e94cb900bb0e6fb82f60a70572bba5384d"} err="failed to get container status \"a969bc7c570548a1ea12b99b5f7b35e94cb900bb0e6fb82f60a70572bba5384d\": rpc error: code = NotFound desc = could not find container \"a969bc7c570548a1ea12b99b5f7b35e94cb900bb0e6fb82f60a70572bba5384d\": container with ID starting with a969bc7c570548a1ea12b99b5f7b35e94cb900bb0e6fb82f60a70572bba5384d not found: ID does not exist"
Feb 19 09:56:06 crc kubenswrapper[4965]: I0219 09:56:06.813292 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v"
Feb 19 09:56:06 crc kubenswrapper[4965]: I0219 09:56:06.894385 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6dft\" (UniqueName: \"kubernetes.io/projected/476478a2-24c2-4386-9876-ab59f36cabbf-kube-api-access-t6dft\") pod \"476478a2-24c2-4386-9876-ab59f36cabbf\" (UID: \"476478a2-24c2-4386-9876-ab59f36cabbf\") "
Feb 19 09:56:06 crc kubenswrapper[4965]: I0219 09:56:06.894559 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/476478a2-24c2-4386-9876-ab59f36cabbf-util\") pod \"476478a2-24c2-4386-9876-ab59f36cabbf\" (UID: \"476478a2-24c2-4386-9876-ab59f36cabbf\") "
Feb 19 09:56:06 crc kubenswrapper[4965]: I0219 09:56:06.894605 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/476478a2-24c2-4386-9876-ab59f36cabbf-bundle\") pod \"476478a2-24c2-4386-9876-ab59f36cabbf\" (UID: \"476478a2-24c2-4386-9876-ab59f36cabbf\") "
Feb 19 09:56:06 crc kubenswrapper[4965]: I0219 09:56:06.896251 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/476478a2-24c2-4386-9876-ab59f36cabbf-bundle" (OuterVolumeSpecName: "bundle") pod "476478a2-24c2-4386-9876-ab59f36cabbf" (UID: "476478a2-24c2-4386-9876-ab59f36cabbf"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 09:56:06 crc kubenswrapper[4965]: I0219 09:56:06.910960 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/476478a2-24c2-4386-9876-ab59f36cabbf-kube-api-access-t6dft" (OuterVolumeSpecName: "kube-api-access-t6dft") pod "476478a2-24c2-4386-9876-ab59f36cabbf" (UID: "476478a2-24c2-4386-9876-ab59f36cabbf"). InnerVolumeSpecName "kube-api-access-t6dft". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 09:56:06 crc kubenswrapper[4965]: I0219 09:56:06.918404 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/476478a2-24c2-4386-9876-ab59f36cabbf-util" (OuterVolumeSpecName: "util") pod "476478a2-24c2-4386-9876-ab59f36cabbf" (UID: "476478a2-24c2-4386-9876-ab59f36cabbf"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 09:56:06 crc kubenswrapper[4965]: I0219 09:56:06.996775 4965 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/476478a2-24c2-4386-9876-ab59f36cabbf-util\") on node \"crc\" DevicePath \"\""
Feb 19 09:56:06 crc kubenswrapper[4965]: I0219 09:56:06.996808 4965 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/476478a2-24c2-4386-9876-ab59f36cabbf-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 09:56:06 crc kubenswrapper[4965]: I0219 09:56:06.996821 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6dft\" (UniqueName: \"kubernetes.io/projected/476478a2-24c2-4386-9876-ab59f36cabbf-kube-api-access-t6dft\") on node \"crc\" DevicePath \"\""
Feb 19 09:56:07 crc kubenswrapper[4965]: I0219 09:56:07.257542 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4de6e909-65bb-4aa0-a792-d25430c74676" path="/var/lib/kubelet/pods/4de6e909-65bb-4aa0-a792-d25430c74676/volumes"
Feb 19 09:56:07 crc kubenswrapper[4965]: I0219 09:56:07.550772 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v" event={"ID":"476478a2-24c2-4386-9876-ab59f36cabbf","Type":"ContainerDied","Data":"f3958449d36e96a3936fd27bc8f42aadf63bac911a6a5858e933ed6c90dba9a5"}
Feb 19 09:56:07 crc kubenswrapper[4965]: I0219 09:56:07.550820 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3958449d36e96a3936fd27bc8f42aadf63bac911a6a5858e933ed6c90dba9a5"
Feb 19 09:56:07 crc kubenswrapper[4965]: I0219 09:56:07.550861 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v"
Feb 19 09:56:07 crc kubenswrapper[4965]: I0219 09:56:07.554586 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6p28" event={"ID":"6b0d8fd6-b505-4482-a453-64584799d747","Type":"ContainerStarted","Data":"6b3467d0f17cdfca10c50dc701dc87dd6e1e5a227d7f1c16df6e6c49d324d9b5"}
Feb 19 09:56:08 crc kubenswrapper[4965]: I0219 09:56:08.562647 4965 generic.go:334] "Generic (PLEG): container finished" podID="6b0d8fd6-b505-4482-a453-64584799d747" containerID="6b3467d0f17cdfca10c50dc701dc87dd6e1e5a227d7f1c16df6e6c49d324d9b5" exitCode=0
Feb 19 09:56:08 crc kubenswrapper[4965]: I0219 09:56:08.562697 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6p28" event={"ID":"6b0d8fd6-b505-4482-a453-64584799d747","Type":"ContainerDied","Data":"6b3467d0f17cdfca10c50dc701dc87dd6e1e5a227d7f1c16df6e6c49d324d9b5"}
Feb 19 09:56:09 crc kubenswrapper[4965]: I0219 09:56:09.569179 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"c17b5348-5eb4-4a51-b913-43f613170fb8","Type":"ContainerStarted","Data":"0af5d90af14a4e33824c907bdbe8d06372d7f58bbf47a1e50515d2e8eabfb009"}
Feb 19 09:56:09 crc kubenswrapper[4965]: I0219 09:56:09.572828 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6p28" event={"ID":"6b0d8fd6-b505-4482-a453-64584799d747","Type":"ContainerStarted","Data":"3c4eeda6d9761e8b2e45e9272165e698403b5492f44735348a0e928e3ec50f97"}
Feb 19 09:56:09 crc kubenswrapper[4965]: I0219 09:56:09.589248 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=3.903816623 podStartE2EDuration="8.589222478s" podCreationTimestamp="2026-02-19 09:56:01 +0000 UTC" firstStartedPulling="2026-02-19 09:56:04.60482592 +0000 UTC m=+820.226147230" lastFinishedPulling="2026-02-19 09:56:09.290231775 +0000 UTC m=+824.911553085" observedRunningTime="2026-02-19 09:56:09.583264693 +0000 UTC m=+825.204586003" watchObservedRunningTime="2026-02-19 09:56:09.589222478 +0000 UTC m=+825.210543788"
Feb 19 09:56:09 crc kubenswrapper[4965]: I0219 09:56:09.656315 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p6p28" podStartSLOduration=2.906157978 podStartE2EDuration="6.656294107s" podCreationTimestamp="2026-02-19 09:56:03 +0000 UTC" firstStartedPulling="2026-02-19 09:56:05.527587903 +0000 UTC m=+821.148909213" lastFinishedPulling="2026-02-19 09:56:09.277724042 +0000 UTC m=+824.899045342" observedRunningTime="2026-02-19 09:56:09.652994657 +0000 UTC m=+825.274315977" watchObservedRunningTime="2026-02-19 09:56:09.656294107 +0000 UTC m=+825.277615417"
Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.250696 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4"]
Feb 19 09:56:12 crc kubenswrapper[4965]: E0219 09:56:12.251388 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476478a2-24c2-4386-9876-ab59f36cabbf" containerName="util"
Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.251403 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="476478a2-24c2-4386-9876-ab59f36cabbf" containerName="util"
Feb 19 09:56:12 crc kubenswrapper[4965]: E0219 09:56:12.251412 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476478a2-24c2-4386-9876-ab59f36cabbf" containerName="pull"
Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.251419 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="476478a2-24c2-4386-9876-ab59f36cabbf" containerName="pull"
Feb 19 09:56:12 crc kubenswrapper[4965]: E0219 09:56:12.251428 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476478a2-24c2-4386-9876-ab59f36cabbf" containerName="extract"
Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.251434 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="476478a2-24c2-4386-9876-ab59f36cabbf" containerName="extract"
Feb 19 09:56:12 crc kubenswrapper[4965]: E0219 09:56:12.251444 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de6e909-65bb-4aa0-a792-d25430c74676" containerName="registry-server"
Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.251450 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de6e909-65bb-4aa0-a792-d25430c74676" containerName="registry-server"
Feb 19 09:56:12 crc kubenswrapper[4965]: E0219 09:56:12.251461 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de6e909-65bb-4aa0-a792-d25430c74676" containerName="extract-content"
Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.251466 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de6e909-65bb-4aa0-a792-d25430c74676" containerName="extract-content"
Feb 19 09:56:12 crc kubenswrapper[4965]: E0219 09:56:12.251479 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de6e909-65bb-4aa0-a792-d25430c74676" containerName="extract-utilities"
Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.251484 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de6e909-65bb-4aa0-a792-d25430c74676" containerName="extract-utilities"
Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.251587 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="476478a2-24c2-4386-9876-ab59f36cabbf" containerName="extract"
Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.251599 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="4de6e909-65bb-4aa0-a792-d25430c74676" containerName="registry-server"
Feb
19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.252181 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4" Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.254395 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.255840 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.256893 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.256924 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.256913 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.258077 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-ghc5z" Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.277687 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4"] Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.380338 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9v4p\" (UniqueName: \"kubernetes.io/projected/d8ed232a-7084-4f69-afdf-6d674b5864de-kube-api-access-p9v4p\") pod \"loki-operator-controller-manager-564bb987d4-6pxn4\" (UID: \"d8ed232a-7084-4f69-afdf-6d674b5864de\") " 
pod="openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4" Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.380419 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d8ed232a-7084-4f69-afdf-6d674b5864de-manager-config\") pod \"loki-operator-controller-manager-564bb987d4-6pxn4\" (UID: \"d8ed232a-7084-4f69-afdf-6d674b5864de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4" Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.380454 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8ed232a-7084-4f69-afdf-6d674b5864de-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-564bb987d4-6pxn4\" (UID: \"d8ed232a-7084-4f69-afdf-6d674b5864de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4" Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.380528 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8ed232a-7084-4f69-afdf-6d674b5864de-apiservice-cert\") pod \"loki-operator-controller-manager-564bb987d4-6pxn4\" (UID: \"d8ed232a-7084-4f69-afdf-6d674b5864de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4" Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.380869 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8ed232a-7084-4f69-afdf-6d674b5864de-webhook-cert\") pod \"loki-operator-controller-manager-564bb987d4-6pxn4\" (UID: \"d8ed232a-7084-4f69-afdf-6d674b5864de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4" Feb 19 09:56:12 crc 
kubenswrapper[4965]: I0219 09:56:12.482726 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8ed232a-7084-4f69-afdf-6d674b5864de-webhook-cert\") pod \"loki-operator-controller-manager-564bb987d4-6pxn4\" (UID: \"d8ed232a-7084-4f69-afdf-6d674b5864de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4" Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.482806 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9v4p\" (UniqueName: \"kubernetes.io/projected/d8ed232a-7084-4f69-afdf-6d674b5864de-kube-api-access-p9v4p\") pod \"loki-operator-controller-manager-564bb987d4-6pxn4\" (UID: \"d8ed232a-7084-4f69-afdf-6d674b5864de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4" Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.482828 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d8ed232a-7084-4f69-afdf-6d674b5864de-manager-config\") pod \"loki-operator-controller-manager-564bb987d4-6pxn4\" (UID: \"d8ed232a-7084-4f69-afdf-6d674b5864de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4" Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.482843 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8ed232a-7084-4f69-afdf-6d674b5864de-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-564bb987d4-6pxn4\" (UID: \"d8ed232a-7084-4f69-afdf-6d674b5864de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4" Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.482860 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/d8ed232a-7084-4f69-afdf-6d674b5864de-apiservice-cert\") pod \"loki-operator-controller-manager-564bb987d4-6pxn4\" (UID: \"d8ed232a-7084-4f69-afdf-6d674b5864de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4" Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.484673 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d8ed232a-7084-4f69-afdf-6d674b5864de-manager-config\") pod \"loki-operator-controller-manager-564bb987d4-6pxn4\" (UID: \"d8ed232a-7084-4f69-afdf-6d674b5864de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4" Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.491658 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8ed232a-7084-4f69-afdf-6d674b5864de-apiservice-cert\") pod \"loki-operator-controller-manager-564bb987d4-6pxn4\" (UID: \"d8ed232a-7084-4f69-afdf-6d674b5864de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4" Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.491833 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8ed232a-7084-4f69-afdf-6d674b5864de-webhook-cert\") pod \"loki-operator-controller-manager-564bb987d4-6pxn4\" (UID: \"d8ed232a-7084-4f69-afdf-6d674b5864de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4" Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.495929 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d8ed232a-7084-4f69-afdf-6d674b5864de-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-564bb987d4-6pxn4\" (UID: \"d8ed232a-7084-4f69-afdf-6d674b5864de\") " 
pod="openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4" Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.503582 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9v4p\" (UniqueName: \"kubernetes.io/projected/d8ed232a-7084-4f69-afdf-6d674b5864de-kube-api-access-p9v4p\") pod \"loki-operator-controller-manager-564bb987d4-6pxn4\" (UID: \"d8ed232a-7084-4f69-afdf-6d674b5864de\") " pod="openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4" Feb 19 09:56:12 crc kubenswrapper[4965]: I0219 09:56:12.569002 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4" Feb 19 09:56:13 crc kubenswrapper[4965]: I0219 09:56:13.125171 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4"] Feb 19 09:56:13 crc kubenswrapper[4965]: W0219 09:56:13.135383 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8ed232a_7084_4f69_afdf_6d674b5864de.slice/crio-63fb6fcb2c50582bb07dfeb0d4313828d6c291f10b6e6fcd8d16ff3f352be6db WatchSource:0}: Error finding container 63fb6fcb2c50582bb07dfeb0d4313828d6c291f10b6e6fcd8d16ff3f352be6db: Status 404 returned error can't find the container with id 63fb6fcb2c50582bb07dfeb0d4313828d6c291f10b6e6fcd8d16ff3f352be6db Feb 19 09:56:13 crc kubenswrapper[4965]: I0219 09:56:13.596382 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4" event={"ID":"d8ed232a-7084-4f69-afdf-6d674b5864de","Type":"ContainerStarted","Data":"63fb6fcb2c50582bb07dfeb0d4313828d6c291f10b6e6fcd8d16ff3f352be6db"} Feb 19 09:56:14 crc kubenswrapper[4965]: I0219 09:56:14.274812 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-p6p28" Feb 19 09:56:14 crc kubenswrapper[4965]: I0219 09:56:14.274887 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p6p28" Feb 19 09:56:15 crc kubenswrapper[4965]: I0219 09:56:15.339179 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p6p28" podUID="6b0d8fd6-b505-4482-a453-64584799d747" containerName="registry-server" probeResult="failure" output=< Feb 19 09:56:15 crc kubenswrapper[4965]: timeout: failed to connect service ":50051" within 1s Feb 19 09:56:15 crc kubenswrapper[4965]: > Feb 19 09:56:19 crc kubenswrapper[4965]: I0219 09:56:19.632382 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4" event={"ID":"d8ed232a-7084-4f69-afdf-6d674b5864de","Type":"ContainerStarted","Data":"81faffffe9bd9f71a3598ad5da6fd7a380b219c6e34c5aa790f1cf03a679261a"} Feb 19 09:56:24 crc kubenswrapper[4965]: I0219 09:56:24.145585 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-62m65"] Feb 19 09:56:24 crc kubenswrapper[4965]: I0219 09:56:24.147023 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62m65" Feb 19 09:56:24 crc kubenswrapper[4965]: I0219 09:56:24.158297 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-62m65"] Feb 19 09:56:24 crc kubenswrapper[4965]: I0219 09:56:24.179564 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ed5a82-4376-4007-8aee-d610f152b603-utilities\") pod \"redhat-marketplace-62m65\" (UID: \"81ed5a82-4376-4007-8aee-d610f152b603\") " pod="openshift-marketplace/redhat-marketplace-62m65" Feb 19 09:56:24 crc kubenswrapper[4965]: I0219 09:56:24.179612 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ed5a82-4376-4007-8aee-d610f152b603-catalog-content\") pod \"redhat-marketplace-62m65\" (UID: \"81ed5a82-4376-4007-8aee-d610f152b603\") " pod="openshift-marketplace/redhat-marketplace-62m65" Feb 19 09:56:24 crc kubenswrapper[4965]: I0219 09:56:24.179656 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftdgn\" (UniqueName: \"kubernetes.io/projected/81ed5a82-4376-4007-8aee-d610f152b603-kube-api-access-ftdgn\") pod \"redhat-marketplace-62m65\" (UID: \"81ed5a82-4376-4007-8aee-d610f152b603\") " pod="openshift-marketplace/redhat-marketplace-62m65" Feb 19 09:56:24 crc kubenswrapper[4965]: I0219 09:56:24.280909 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ed5a82-4376-4007-8aee-d610f152b603-utilities\") pod \"redhat-marketplace-62m65\" (UID: \"81ed5a82-4376-4007-8aee-d610f152b603\") " pod="openshift-marketplace/redhat-marketplace-62m65" Feb 19 09:56:24 crc kubenswrapper[4965]: I0219 09:56:24.281294 4965 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ed5a82-4376-4007-8aee-d610f152b603-catalog-content\") pod \"redhat-marketplace-62m65\" (UID: \"81ed5a82-4376-4007-8aee-d610f152b603\") " pod="openshift-marketplace/redhat-marketplace-62m65" Feb 19 09:56:24 crc kubenswrapper[4965]: I0219 09:56:24.281351 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftdgn\" (UniqueName: \"kubernetes.io/projected/81ed5a82-4376-4007-8aee-d610f152b603-kube-api-access-ftdgn\") pod \"redhat-marketplace-62m65\" (UID: \"81ed5a82-4376-4007-8aee-d610f152b603\") " pod="openshift-marketplace/redhat-marketplace-62m65" Feb 19 09:56:24 crc kubenswrapper[4965]: I0219 09:56:24.282781 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ed5a82-4376-4007-8aee-d610f152b603-catalog-content\") pod \"redhat-marketplace-62m65\" (UID: \"81ed5a82-4376-4007-8aee-d610f152b603\") " pod="openshift-marketplace/redhat-marketplace-62m65" Feb 19 09:56:24 crc kubenswrapper[4965]: I0219 09:56:24.282913 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ed5a82-4376-4007-8aee-d610f152b603-utilities\") pod \"redhat-marketplace-62m65\" (UID: \"81ed5a82-4376-4007-8aee-d610f152b603\") " pod="openshift-marketplace/redhat-marketplace-62m65" Feb 19 09:56:24 crc kubenswrapper[4965]: I0219 09:56:24.321521 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftdgn\" (UniqueName: \"kubernetes.io/projected/81ed5a82-4376-4007-8aee-d610f152b603-kube-api-access-ftdgn\") pod \"redhat-marketplace-62m65\" (UID: \"81ed5a82-4376-4007-8aee-d610f152b603\") " pod="openshift-marketplace/redhat-marketplace-62m65" Feb 19 09:56:24 crc kubenswrapper[4965]: I0219 09:56:24.359022 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-p6p28" Feb 19 09:56:24 crc kubenswrapper[4965]: I0219 09:56:24.408763 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p6p28" Feb 19 09:56:24 crc kubenswrapper[4965]: I0219 09:56:24.482455 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62m65" Feb 19 09:56:24 crc kubenswrapper[4965]: I0219 09:56:24.663727 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4" event={"ID":"d8ed232a-7084-4f69-afdf-6d674b5864de","Type":"ContainerStarted","Data":"32b83e25d4225b9a320ed2b1070a8f2ab054f26c22254f8b712686daa9dcb914"} Feb 19 09:56:24 crc kubenswrapper[4965]: I0219 09:56:24.664120 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4" Feb 19 09:56:24 crc kubenswrapper[4965]: I0219 09:56:24.666343 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4" Feb 19 09:56:24 crc kubenswrapper[4965]: I0219 09:56:24.687378 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-564bb987d4-6pxn4" podStartSLOduration=1.875415336 podStartE2EDuration="12.687360451s" podCreationTimestamp="2026-02-19 09:56:12 +0000 UTC" firstStartedPulling="2026-02-19 09:56:13.140021944 +0000 UTC m=+828.761343254" lastFinishedPulling="2026-02-19 09:56:23.951967059 +0000 UTC m=+839.573288369" observedRunningTime="2026-02-19 09:56:24.68357881 +0000 UTC m=+840.304900140" watchObservedRunningTime="2026-02-19 09:56:24.687360451 +0000 UTC m=+840.308681761" Feb 19 09:56:24 crc kubenswrapper[4965]: I0219 09:56:24.713284 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-62m65"] Feb 19 09:56:25 crc kubenswrapper[4965]: I0219 09:56:25.672624 4965 generic.go:334] "Generic (PLEG): container finished" podID="81ed5a82-4376-4007-8aee-d610f152b603" containerID="c7a13ed1877ee8939ae23ab6a43f2761d8b1103f304a6265cbb971aa45318880" exitCode=0 Feb 19 09:56:25 crc kubenswrapper[4965]: I0219 09:56:25.672729 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62m65" event={"ID":"81ed5a82-4376-4007-8aee-d610f152b603","Type":"ContainerDied","Data":"c7a13ed1877ee8939ae23ab6a43f2761d8b1103f304a6265cbb971aa45318880"} Feb 19 09:56:25 crc kubenswrapper[4965]: I0219 09:56:25.673111 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62m65" event={"ID":"81ed5a82-4376-4007-8aee-d610f152b603","Type":"ContainerStarted","Data":"bb4a25fbc2d824bc1d82affd5b4d2583f65b87ca5285b2ffa02dc7bb5a46ccde"} Feb 19 09:56:26 crc kubenswrapper[4965]: I0219 09:56:26.745656 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p6p28"] Feb 19 09:56:26 crc kubenswrapper[4965]: I0219 09:56:26.746283 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p6p28" podUID="6b0d8fd6-b505-4482-a453-64584799d747" containerName="registry-server" containerID="cri-o://3c4eeda6d9761e8b2e45e9272165e698403b5492f44735348a0e928e3ec50f97" gracePeriod=2 Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.193114 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p6p28" Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.328824 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm9v6\" (UniqueName: \"kubernetes.io/projected/6b0d8fd6-b505-4482-a453-64584799d747-kube-api-access-lm9v6\") pod \"6b0d8fd6-b505-4482-a453-64584799d747\" (UID: \"6b0d8fd6-b505-4482-a453-64584799d747\") " Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.329442 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b0d8fd6-b505-4482-a453-64584799d747-utilities\") pod \"6b0d8fd6-b505-4482-a453-64584799d747\" (UID: \"6b0d8fd6-b505-4482-a453-64584799d747\") " Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.329631 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b0d8fd6-b505-4482-a453-64584799d747-catalog-content\") pod \"6b0d8fd6-b505-4482-a453-64584799d747\" (UID: \"6b0d8fd6-b505-4482-a453-64584799d747\") " Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.330425 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b0d8fd6-b505-4482-a453-64584799d747-utilities" (OuterVolumeSpecName: "utilities") pod "6b0d8fd6-b505-4482-a453-64584799d747" (UID: "6b0d8fd6-b505-4482-a453-64584799d747"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.334728 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b0d8fd6-b505-4482-a453-64584799d747-kube-api-access-lm9v6" (OuterVolumeSpecName: "kube-api-access-lm9v6") pod "6b0d8fd6-b505-4482-a453-64584799d747" (UID: "6b0d8fd6-b505-4482-a453-64584799d747"). InnerVolumeSpecName "kube-api-access-lm9v6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.431368 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm9v6\" (UniqueName: \"kubernetes.io/projected/6b0d8fd6-b505-4482-a453-64584799d747-kube-api-access-lm9v6\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.431619 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b0d8fd6-b505-4482-a453-64584799d747-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.475925 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b0d8fd6-b505-4482-a453-64584799d747-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b0d8fd6-b505-4482-a453-64584799d747" (UID: "6b0d8fd6-b505-4482-a453-64584799d747"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.533173 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b0d8fd6-b505-4482-a453-64584799d747-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.690533 4965 generic.go:334] "Generic (PLEG): container finished" podID="6b0d8fd6-b505-4482-a453-64584799d747" containerID="3c4eeda6d9761e8b2e45e9272165e698403b5492f44735348a0e928e3ec50f97" exitCode=0 Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.690720 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6p28" event={"ID":"6b0d8fd6-b505-4482-a453-64584799d747","Type":"ContainerDied","Data":"3c4eeda6d9761e8b2e45e9272165e698403b5492f44735348a0e928e3ec50f97"} Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.690721 4965 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p6p28" Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.691108 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6p28" event={"ID":"6b0d8fd6-b505-4482-a453-64584799d747","Type":"ContainerDied","Data":"42178de044193b0348066b0e82280a93c2117c155990ac1cab50a0b067422278"} Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.691146 4965 scope.go:117] "RemoveContainer" containerID="3c4eeda6d9761e8b2e45e9272165e698403b5492f44735348a0e928e3ec50f97" Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.694094 4965 generic.go:334] "Generic (PLEG): container finished" podID="81ed5a82-4376-4007-8aee-d610f152b603" containerID="5b1cef680b1a04a9850698aa95bcb3a4f77b8ee6aad1bf671bd38c1dd32b07ca" exitCode=0 Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.694271 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62m65" event={"ID":"81ed5a82-4376-4007-8aee-d610f152b603","Type":"ContainerDied","Data":"5b1cef680b1a04a9850698aa95bcb3a4f77b8ee6aad1bf671bd38c1dd32b07ca"} Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.735547 4965 scope.go:117] "RemoveContainer" containerID="6b3467d0f17cdfca10c50dc701dc87dd6e1e5a227d7f1c16df6e6c49d324d9b5" Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.762368 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p6p28"] Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.768783 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p6p28"] Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.769641 4965 scope.go:117] "RemoveContainer" containerID="0ef7d1891c632e0af316a6cee1167c91a23711f04862dd4a8ba412b8753866ef" Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.784931 4965 scope.go:117] "RemoveContainer" 
containerID="3c4eeda6d9761e8b2e45e9272165e698403b5492f44735348a0e928e3ec50f97" Feb 19 09:56:27 crc kubenswrapper[4965]: E0219 09:56:27.785572 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c4eeda6d9761e8b2e45e9272165e698403b5492f44735348a0e928e3ec50f97\": container with ID starting with 3c4eeda6d9761e8b2e45e9272165e698403b5492f44735348a0e928e3ec50f97 not found: ID does not exist" containerID="3c4eeda6d9761e8b2e45e9272165e698403b5492f44735348a0e928e3ec50f97" Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.785614 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c4eeda6d9761e8b2e45e9272165e698403b5492f44735348a0e928e3ec50f97"} err="failed to get container status \"3c4eeda6d9761e8b2e45e9272165e698403b5492f44735348a0e928e3ec50f97\": rpc error: code = NotFound desc = could not find container \"3c4eeda6d9761e8b2e45e9272165e698403b5492f44735348a0e928e3ec50f97\": container with ID starting with 3c4eeda6d9761e8b2e45e9272165e698403b5492f44735348a0e928e3ec50f97 not found: ID does not exist" Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.785652 4965 scope.go:117] "RemoveContainer" containerID="6b3467d0f17cdfca10c50dc701dc87dd6e1e5a227d7f1c16df6e6c49d324d9b5" Feb 19 09:56:27 crc kubenswrapper[4965]: E0219 09:56:27.786068 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b3467d0f17cdfca10c50dc701dc87dd6e1e5a227d7f1c16df6e6c49d324d9b5\": container with ID starting with 6b3467d0f17cdfca10c50dc701dc87dd6e1e5a227d7f1c16df6e6c49d324d9b5 not found: ID does not exist" containerID="6b3467d0f17cdfca10c50dc701dc87dd6e1e5a227d7f1c16df6e6c49d324d9b5" Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.786119 4965 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6b3467d0f17cdfca10c50dc701dc87dd6e1e5a227d7f1c16df6e6c49d324d9b5"} err="failed to get container status \"6b3467d0f17cdfca10c50dc701dc87dd6e1e5a227d7f1c16df6e6c49d324d9b5\": rpc error: code = NotFound desc = could not find container \"6b3467d0f17cdfca10c50dc701dc87dd6e1e5a227d7f1c16df6e6c49d324d9b5\": container with ID starting with 6b3467d0f17cdfca10c50dc701dc87dd6e1e5a227d7f1c16df6e6c49d324d9b5 not found: ID does not exist" Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.786153 4965 scope.go:117] "RemoveContainer" containerID="0ef7d1891c632e0af316a6cee1167c91a23711f04862dd4a8ba412b8753866ef" Feb 19 09:56:27 crc kubenswrapper[4965]: E0219 09:56:27.786950 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ef7d1891c632e0af316a6cee1167c91a23711f04862dd4a8ba412b8753866ef\": container with ID starting with 0ef7d1891c632e0af316a6cee1167c91a23711f04862dd4a8ba412b8753866ef not found: ID does not exist" containerID="0ef7d1891c632e0af316a6cee1167c91a23711f04862dd4a8ba412b8753866ef" Feb 19 09:56:27 crc kubenswrapper[4965]: I0219 09:56:27.786998 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ef7d1891c632e0af316a6cee1167c91a23711f04862dd4a8ba412b8753866ef"} err="failed to get container status \"0ef7d1891c632e0af316a6cee1167c91a23711f04862dd4a8ba412b8753866ef\": rpc error: code = NotFound desc = could not find container \"0ef7d1891c632e0af316a6cee1167c91a23711f04862dd4a8ba412b8753866ef\": container with ID starting with 0ef7d1891c632e0af316a6cee1167c91a23711f04862dd4a8ba412b8753866ef not found: ID does not exist" Feb 19 09:56:28 crc kubenswrapper[4965]: I0219 09:56:28.701836 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62m65" 
event={"ID":"81ed5a82-4376-4007-8aee-d610f152b603","Type":"ContainerStarted","Data":"e26b70a115252ffdb6f0aa483c759e7bd84b360a7db48622d6c6bb8f161ccf11"} Feb 19 09:56:28 crc kubenswrapper[4965]: I0219 09:56:28.720602 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-62m65" podStartSLOduration=2.290080839 podStartE2EDuration="4.720580215s" podCreationTimestamp="2026-02-19 09:56:24 +0000 UTC" firstStartedPulling="2026-02-19 09:56:25.674663341 +0000 UTC m=+841.295984671" lastFinishedPulling="2026-02-19 09:56:28.105162737 +0000 UTC m=+843.726484047" observedRunningTime="2026-02-19 09:56:28.720425882 +0000 UTC m=+844.341747202" watchObservedRunningTime="2026-02-19 09:56:28.720580215 +0000 UTC m=+844.341901525" Feb 19 09:56:29 crc kubenswrapper[4965]: I0219 09:56:29.204247 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b0d8fd6-b505-4482-a453-64584799d747" path="/var/lib/kubelet/pods/6b0d8fd6-b505-4482-a453-64584799d747/volumes" Feb 19 09:56:34 crc kubenswrapper[4965]: I0219 09:56:34.506606 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-62m65" Feb 19 09:56:34 crc kubenswrapper[4965]: I0219 09:56:34.506972 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-62m65" Feb 19 09:56:34 crc kubenswrapper[4965]: I0219 09:56:34.572328 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-62m65" Feb 19 09:56:34 crc kubenswrapper[4965]: I0219 09:56:34.794841 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-62m65" Feb 19 09:56:35 crc kubenswrapper[4965]: I0219 09:56:35.152929 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-62m65"] Feb 19 09:56:36 crc 
kubenswrapper[4965]: I0219 09:56:36.762230 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-62m65" podUID="81ed5a82-4376-4007-8aee-d610f152b603" containerName="registry-server" containerID="cri-o://e26b70a115252ffdb6f0aa483c759e7bd84b360a7db48622d6c6bb8f161ccf11" gracePeriod=2 Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.203972 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62m65" Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.266347 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftdgn\" (UniqueName: \"kubernetes.io/projected/81ed5a82-4376-4007-8aee-d610f152b603-kube-api-access-ftdgn\") pod \"81ed5a82-4376-4007-8aee-d610f152b603\" (UID: \"81ed5a82-4376-4007-8aee-d610f152b603\") " Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.266425 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ed5a82-4376-4007-8aee-d610f152b603-catalog-content\") pod \"81ed5a82-4376-4007-8aee-d610f152b603\" (UID: \"81ed5a82-4376-4007-8aee-d610f152b603\") " Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.266633 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ed5a82-4376-4007-8aee-d610f152b603-utilities\") pod \"81ed5a82-4376-4007-8aee-d610f152b603\" (UID: \"81ed5a82-4376-4007-8aee-d610f152b603\") " Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.269558 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81ed5a82-4376-4007-8aee-d610f152b603-utilities" (OuterVolumeSpecName: "utilities") pod "81ed5a82-4376-4007-8aee-d610f152b603" (UID: "81ed5a82-4376-4007-8aee-d610f152b603"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.274169 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ed5a82-4376-4007-8aee-d610f152b603-kube-api-access-ftdgn" (OuterVolumeSpecName: "kube-api-access-ftdgn") pod "81ed5a82-4376-4007-8aee-d610f152b603" (UID: "81ed5a82-4376-4007-8aee-d610f152b603"). InnerVolumeSpecName "kube-api-access-ftdgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.294960 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81ed5a82-4376-4007-8aee-d610f152b603-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81ed5a82-4376-4007-8aee-d610f152b603" (UID: "81ed5a82-4376-4007-8aee-d610f152b603"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.367950 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ed5a82-4376-4007-8aee-d610f152b603-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.367988 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ed5a82-4376-4007-8aee-d610f152b603-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.368000 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftdgn\" (UniqueName: \"kubernetes.io/projected/81ed5a82-4376-4007-8aee-d610f152b603-kube-api-access-ftdgn\") on node \"crc\" DevicePath \"\"" Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.770106 4965 generic.go:334] "Generic (PLEG): container finished" podID="81ed5a82-4376-4007-8aee-d610f152b603" 
containerID="e26b70a115252ffdb6f0aa483c759e7bd84b360a7db48622d6c6bb8f161ccf11" exitCode=0 Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.770253 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62m65" Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.770288 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62m65" event={"ID":"81ed5a82-4376-4007-8aee-d610f152b603","Type":"ContainerDied","Data":"e26b70a115252ffdb6f0aa483c759e7bd84b360a7db48622d6c6bb8f161ccf11"} Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.771915 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62m65" event={"ID":"81ed5a82-4376-4007-8aee-d610f152b603","Type":"ContainerDied","Data":"bb4a25fbc2d824bc1d82affd5b4d2583f65b87ca5285b2ffa02dc7bb5a46ccde"} Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.772031 4965 scope.go:117] "RemoveContainer" containerID="e26b70a115252ffdb6f0aa483c759e7bd84b360a7db48622d6c6bb8f161ccf11" Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.793843 4965 scope.go:117] "RemoveContainer" containerID="5b1cef680b1a04a9850698aa95bcb3a4f77b8ee6aad1bf671bd38c1dd32b07ca" Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.808340 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-62m65"] Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.820940 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-62m65"] Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.831893 4965 scope.go:117] "RemoveContainer" containerID="c7a13ed1877ee8939ae23ab6a43f2761d8b1103f304a6265cbb971aa45318880" Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.848117 4965 scope.go:117] "RemoveContainer" containerID="e26b70a115252ffdb6f0aa483c759e7bd84b360a7db48622d6c6bb8f161ccf11" Feb 19 
09:56:37 crc kubenswrapper[4965]: E0219 09:56:37.848823 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e26b70a115252ffdb6f0aa483c759e7bd84b360a7db48622d6c6bb8f161ccf11\": container with ID starting with e26b70a115252ffdb6f0aa483c759e7bd84b360a7db48622d6c6bb8f161ccf11 not found: ID does not exist" containerID="e26b70a115252ffdb6f0aa483c759e7bd84b360a7db48622d6c6bb8f161ccf11" Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.848889 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e26b70a115252ffdb6f0aa483c759e7bd84b360a7db48622d6c6bb8f161ccf11"} err="failed to get container status \"e26b70a115252ffdb6f0aa483c759e7bd84b360a7db48622d6c6bb8f161ccf11\": rpc error: code = NotFound desc = could not find container \"e26b70a115252ffdb6f0aa483c759e7bd84b360a7db48622d6c6bb8f161ccf11\": container with ID starting with e26b70a115252ffdb6f0aa483c759e7bd84b360a7db48622d6c6bb8f161ccf11 not found: ID does not exist" Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.848920 4965 scope.go:117] "RemoveContainer" containerID="5b1cef680b1a04a9850698aa95bcb3a4f77b8ee6aad1bf671bd38c1dd32b07ca" Feb 19 09:56:37 crc kubenswrapper[4965]: E0219 09:56:37.849428 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b1cef680b1a04a9850698aa95bcb3a4f77b8ee6aad1bf671bd38c1dd32b07ca\": container with ID starting with 5b1cef680b1a04a9850698aa95bcb3a4f77b8ee6aad1bf671bd38c1dd32b07ca not found: ID does not exist" containerID="5b1cef680b1a04a9850698aa95bcb3a4f77b8ee6aad1bf671bd38c1dd32b07ca" Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.849544 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b1cef680b1a04a9850698aa95bcb3a4f77b8ee6aad1bf671bd38c1dd32b07ca"} err="failed to get container status 
\"5b1cef680b1a04a9850698aa95bcb3a4f77b8ee6aad1bf671bd38c1dd32b07ca\": rpc error: code = NotFound desc = could not find container \"5b1cef680b1a04a9850698aa95bcb3a4f77b8ee6aad1bf671bd38c1dd32b07ca\": container with ID starting with 5b1cef680b1a04a9850698aa95bcb3a4f77b8ee6aad1bf671bd38c1dd32b07ca not found: ID does not exist" Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.849636 4965 scope.go:117] "RemoveContainer" containerID="c7a13ed1877ee8939ae23ab6a43f2761d8b1103f304a6265cbb971aa45318880" Feb 19 09:56:37 crc kubenswrapper[4965]: E0219 09:56:37.850098 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7a13ed1877ee8939ae23ab6a43f2761d8b1103f304a6265cbb971aa45318880\": container with ID starting with c7a13ed1877ee8939ae23ab6a43f2761d8b1103f304a6265cbb971aa45318880 not found: ID does not exist" containerID="c7a13ed1877ee8939ae23ab6a43f2761d8b1103f304a6265cbb971aa45318880" Feb 19 09:56:37 crc kubenswrapper[4965]: I0219 09:56:37.850150 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7a13ed1877ee8939ae23ab6a43f2761d8b1103f304a6265cbb971aa45318880"} err="failed to get container status \"c7a13ed1877ee8939ae23ab6a43f2761d8b1103f304a6265cbb971aa45318880\": rpc error: code = NotFound desc = could not find container \"c7a13ed1877ee8939ae23ab6a43f2761d8b1103f304a6265cbb971aa45318880\": container with ID starting with c7a13ed1877ee8939ae23ab6a43f2761d8b1103f304a6265cbb971aa45318880 not found: ID does not exist" Feb 19 09:56:39 crc kubenswrapper[4965]: I0219 09:56:39.208122 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ed5a82-4376-4007-8aee-d610f152b603" path="/var/lib/kubelet/pods/81ed5a82-4376-4007-8aee-d610f152b603/volumes" Feb 19 09:56:59 crc kubenswrapper[4965]: I0219 09:56:59.737849 4965 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t"] Feb 19 09:56:59 crc kubenswrapper[4965]: E0219 09:56:59.738645 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b0d8fd6-b505-4482-a453-64584799d747" containerName="extract-utilities" Feb 19 09:56:59 crc kubenswrapper[4965]: I0219 09:56:59.738661 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b0d8fd6-b505-4482-a453-64584799d747" containerName="extract-utilities" Feb 19 09:56:59 crc kubenswrapper[4965]: E0219 09:56:59.738673 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b0d8fd6-b505-4482-a453-64584799d747" containerName="registry-server" Feb 19 09:56:59 crc kubenswrapper[4965]: I0219 09:56:59.738681 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b0d8fd6-b505-4482-a453-64584799d747" containerName="registry-server" Feb 19 09:56:59 crc kubenswrapper[4965]: E0219 09:56:59.738691 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ed5a82-4376-4007-8aee-d610f152b603" containerName="extract-utilities" Feb 19 09:56:59 crc kubenswrapper[4965]: I0219 09:56:59.738699 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ed5a82-4376-4007-8aee-d610f152b603" containerName="extract-utilities" Feb 19 09:56:59 crc kubenswrapper[4965]: E0219 09:56:59.738711 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b0d8fd6-b505-4482-a453-64584799d747" containerName="extract-content" Feb 19 09:56:59 crc kubenswrapper[4965]: I0219 09:56:59.738719 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b0d8fd6-b505-4482-a453-64584799d747" containerName="extract-content" Feb 19 09:56:59 crc kubenswrapper[4965]: E0219 09:56:59.738738 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ed5a82-4376-4007-8aee-d610f152b603" containerName="extract-content" Feb 19 09:56:59 crc kubenswrapper[4965]: I0219 09:56:59.738747 4965 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="81ed5a82-4376-4007-8aee-d610f152b603" containerName="extract-content" Feb 19 09:56:59 crc kubenswrapper[4965]: E0219 09:56:59.738760 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ed5a82-4376-4007-8aee-d610f152b603" containerName="registry-server" Feb 19 09:56:59 crc kubenswrapper[4965]: I0219 09:56:59.738767 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ed5a82-4376-4007-8aee-d610f152b603" containerName="registry-server" Feb 19 09:56:59 crc kubenswrapper[4965]: I0219 09:56:59.738878 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b0d8fd6-b505-4482-a453-64584799d747" containerName="registry-server" Feb 19 09:56:59 crc kubenswrapper[4965]: I0219 09:56:59.738894 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ed5a82-4376-4007-8aee-d610f152b603" containerName="registry-server" Feb 19 09:56:59 crc kubenswrapper[4965]: I0219 09:56:59.739676 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t" Feb 19 09:56:59 crc kubenswrapper[4965]: I0219 09:56:59.741829 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 09:56:59 crc kubenswrapper[4965]: I0219 09:56:59.749600 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t"] Feb 19 09:56:59 crc kubenswrapper[4965]: I0219 09:56:59.777900 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a18e3883-75f3-47f3-a6a6-31358dbc980a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t\" (UID: \"a18e3883-75f3-47f3-a6a6-31358dbc980a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t" Feb 19 09:56:59 crc kubenswrapper[4965]: I0219 09:56:59.777966 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhw49\" (UniqueName: \"kubernetes.io/projected/a18e3883-75f3-47f3-a6a6-31358dbc980a-kube-api-access-hhw49\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t\" (UID: \"a18e3883-75f3-47f3-a6a6-31358dbc980a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t" Feb 19 09:56:59 crc kubenswrapper[4965]: I0219 09:56:59.778034 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a18e3883-75f3-47f3-a6a6-31358dbc980a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t\" (UID: \"a18e3883-75f3-47f3-a6a6-31358dbc980a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t" Feb 19 09:56:59 crc kubenswrapper[4965]: 
I0219 09:56:59.879511 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a18e3883-75f3-47f3-a6a6-31358dbc980a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t\" (UID: \"a18e3883-75f3-47f3-a6a6-31358dbc980a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t" Feb 19 09:56:59 crc kubenswrapper[4965]: I0219 09:56:59.880141 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhw49\" (UniqueName: \"kubernetes.io/projected/a18e3883-75f3-47f3-a6a6-31358dbc980a-kube-api-access-hhw49\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t\" (UID: \"a18e3883-75f3-47f3-a6a6-31358dbc980a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t" Feb 19 09:56:59 crc kubenswrapper[4965]: I0219 09:56:59.880303 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a18e3883-75f3-47f3-a6a6-31358dbc980a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t\" (UID: \"a18e3883-75f3-47f3-a6a6-31358dbc980a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t" Feb 19 09:56:59 crc kubenswrapper[4965]: I0219 09:56:59.880325 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a18e3883-75f3-47f3-a6a6-31358dbc980a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t\" (UID: \"a18e3883-75f3-47f3-a6a6-31358dbc980a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t" Feb 19 09:56:59 crc kubenswrapper[4965]: I0219 09:56:59.880769 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/a18e3883-75f3-47f3-a6a6-31358dbc980a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t\" (UID: \"a18e3883-75f3-47f3-a6a6-31358dbc980a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t" Feb 19 09:56:59 crc kubenswrapper[4965]: I0219 09:56:59.901479 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhw49\" (UniqueName: \"kubernetes.io/projected/a18e3883-75f3-47f3-a6a6-31358dbc980a-kube-api-access-hhw49\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t\" (UID: \"a18e3883-75f3-47f3-a6a6-31358dbc980a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t" Feb 19 09:57:00 crc kubenswrapper[4965]: I0219 09:57:00.056431 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t" Feb 19 09:57:00 crc kubenswrapper[4965]: I0219 09:57:00.526712 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t"] Feb 19 09:57:00 crc kubenswrapper[4965]: I0219 09:57:00.935005 4965 generic.go:334] "Generic (PLEG): container finished" podID="a18e3883-75f3-47f3-a6a6-31358dbc980a" containerID="1707e8a3850cb9bf061dc0f1380bdc5365881c4123eb01d51a024b7ee0ded68b" exitCode=0 Feb 19 09:57:00 crc kubenswrapper[4965]: I0219 09:57:00.935045 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t" event={"ID":"a18e3883-75f3-47f3-a6a6-31358dbc980a","Type":"ContainerDied","Data":"1707e8a3850cb9bf061dc0f1380bdc5365881c4123eb01d51a024b7ee0ded68b"} Feb 19 09:57:00 crc kubenswrapper[4965]: I0219 09:57:00.935070 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t" event={"ID":"a18e3883-75f3-47f3-a6a6-31358dbc980a","Type":"ContainerStarted","Data":"ad6f5f9aba589fe243e4bec471059626fdec16d3179770121583564340d81af1"} Feb 19 09:57:02 crc kubenswrapper[4965]: I0219 09:57:02.957854 4965 generic.go:334] "Generic (PLEG): container finished" podID="a18e3883-75f3-47f3-a6a6-31358dbc980a" containerID="ccde42484c53be0d5549d47df845ecbd1c82061dc9cfa8ee2191c335eb96549f" exitCode=0 Feb 19 09:57:02 crc kubenswrapper[4965]: I0219 09:57:02.957949 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t" event={"ID":"a18e3883-75f3-47f3-a6a6-31358dbc980a","Type":"ContainerDied","Data":"ccde42484c53be0d5549d47df845ecbd1c82061dc9cfa8ee2191c335eb96549f"} Feb 19 09:57:03 crc kubenswrapper[4965]: I0219 09:57:03.969278 4965 generic.go:334] "Generic (PLEG): container finished" podID="a18e3883-75f3-47f3-a6a6-31358dbc980a" containerID="bb91926bf2e6b0871f5322a8c77326a5a40a1608563a90a1f2462775108fa6e9" exitCode=0 Feb 19 09:57:03 crc kubenswrapper[4965]: I0219 09:57:03.969331 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t" event={"ID":"a18e3883-75f3-47f3-a6a6-31358dbc980a","Type":"ContainerDied","Data":"bb91926bf2e6b0871f5322a8c77326a5a40a1608563a90a1f2462775108fa6e9"} Feb 19 09:57:05 crc kubenswrapper[4965]: I0219 09:57:05.238476 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t" Feb 19 09:57:05 crc kubenswrapper[4965]: I0219 09:57:05.256627 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhw49\" (UniqueName: \"kubernetes.io/projected/a18e3883-75f3-47f3-a6a6-31358dbc980a-kube-api-access-hhw49\") pod \"a18e3883-75f3-47f3-a6a6-31358dbc980a\" (UID: \"a18e3883-75f3-47f3-a6a6-31358dbc980a\") " Feb 19 09:57:05 crc kubenswrapper[4965]: I0219 09:57:05.256720 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a18e3883-75f3-47f3-a6a6-31358dbc980a-util\") pod \"a18e3883-75f3-47f3-a6a6-31358dbc980a\" (UID: \"a18e3883-75f3-47f3-a6a6-31358dbc980a\") " Feb 19 09:57:05 crc kubenswrapper[4965]: I0219 09:57:05.256775 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a18e3883-75f3-47f3-a6a6-31358dbc980a-bundle\") pod \"a18e3883-75f3-47f3-a6a6-31358dbc980a\" (UID: \"a18e3883-75f3-47f3-a6a6-31358dbc980a\") " Feb 19 09:57:05 crc kubenswrapper[4965]: I0219 09:57:05.257573 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a18e3883-75f3-47f3-a6a6-31358dbc980a-bundle" (OuterVolumeSpecName: "bundle") pod "a18e3883-75f3-47f3-a6a6-31358dbc980a" (UID: "a18e3883-75f3-47f3-a6a6-31358dbc980a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:57:05 crc kubenswrapper[4965]: I0219 09:57:05.262136 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a18e3883-75f3-47f3-a6a6-31358dbc980a-kube-api-access-hhw49" (OuterVolumeSpecName: "kube-api-access-hhw49") pod "a18e3883-75f3-47f3-a6a6-31358dbc980a" (UID: "a18e3883-75f3-47f3-a6a6-31358dbc980a"). InnerVolumeSpecName "kube-api-access-hhw49". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:57:05 crc kubenswrapper[4965]: I0219 09:57:05.275138 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a18e3883-75f3-47f3-a6a6-31358dbc980a-util" (OuterVolumeSpecName: "util") pod "a18e3883-75f3-47f3-a6a6-31358dbc980a" (UID: "a18e3883-75f3-47f3-a6a6-31358dbc980a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:57:05 crc kubenswrapper[4965]: I0219 09:57:05.357874 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhw49\" (UniqueName: \"kubernetes.io/projected/a18e3883-75f3-47f3-a6a6-31358dbc980a-kube-api-access-hhw49\") on node \"crc\" DevicePath \"\"" Feb 19 09:57:05 crc kubenswrapper[4965]: I0219 09:57:05.357915 4965 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a18e3883-75f3-47f3-a6a6-31358dbc980a-util\") on node \"crc\" DevicePath \"\"" Feb 19 09:57:05 crc kubenswrapper[4965]: I0219 09:57:05.357924 4965 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a18e3883-75f3-47f3-a6a6-31358dbc980a-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:57:05 crc kubenswrapper[4965]: I0219 09:57:05.982973 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t" event={"ID":"a18e3883-75f3-47f3-a6a6-31358dbc980a","Type":"ContainerDied","Data":"ad6f5f9aba589fe243e4bec471059626fdec16d3179770121583564340d81af1"} Feb 19 09:57:05 crc kubenswrapper[4965]: I0219 09:57:05.983023 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad6f5f9aba589fe243e4bec471059626fdec16d3179770121583564340d81af1" Feb 19 09:57:05 crc kubenswrapper[4965]: I0219 09:57:05.983034 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t" Feb 19 09:57:11 crc kubenswrapper[4965]: I0219 09:57:11.744113 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-kkrqs"] Feb 19 09:57:11 crc kubenswrapper[4965]: E0219 09:57:11.745017 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a18e3883-75f3-47f3-a6a6-31358dbc980a" containerName="util" Feb 19 09:57:11 crc kubenswrapper[4965]: I0219 09:57:11.745039 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="a18e3883-75f3-47f3-a6a6-31358dbc980a" containerName="util" Feb 19 09:57:11 crc kubenswrapper[4965]: E0219 09:57:11.745063 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a18e3883-75f3-47f3-a6a6-31358dbc980a" containerName="extract" Feb 19 09:57:11 crc kubenswrapper[4965]: I0219 09:57:11.745076 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="a18e3883-75f3-47f3-a6a6-31358dbc980a" containerName="extract" Feb 19 09:57:11 crc kubenswrapper[4965]: E0219 09:57:11.745107 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a18e3883-75f3-47f3-a6a6-31358dbc980a" containerName="pull" Feb 19 09:57:11 crc kubenswrapper[4965]: I0219 09:57:11.745120 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="a18e3883-75f3-47f3-a6a6-31358dbc980a" containerName="pull" Feb 19 09:57:11 crc kubenswrapper[4965]: I0219 09:57:11.745317 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="a18e3883-75f3-47f3-a6a6-31358dbc980a" containerName="extract" Feb 19 09:57:11 crc kubenswrapper[4965]: I0219 09:57:11.745916 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-kkrqs" Feb 19 09:57:11 crc kubenswrapper[4965]: I0219 09:57:11.749698 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 19 09:57:11 crc kubenswrapper[4965]: I0219 09:57:11.750333 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 19 09:57:11 crc kubenswrapper[4965]: I0219 09:57:11.756605 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-ttpp4" Feb 19 09:57:11 crc kubenswrapper[4965]: I0219 09:57:11.757596 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-kkrqs"] Feb 19 09:57:11 crc kubenswrapper[4965]: I0219 09:57:11.785339 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2zvw\" (UniqueName: \"kubernetes.io/projected/56e58cdb-3ef2-4cbf-a926-70ac47e83f9c-kube-api-access-r2zvw\") pod \"nmstate-operator-694c9596b7-kkrqs\" (UID: \"56e58cdb-3ef2-4cbf-a926-70ac47e83f9c\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-kkrqs" Feb 19 09:57:11 crc kubenswrapper[4965]: I0219 09:57:11.886833 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2zvw\" (UniqueName: \"kubernetes.io/projected/56e58cdb-3ef2-4cbf-a926-70ac47e83f9c-kube-api-access-r2zvw\") pod \"nmstate-operator-694c9596b7-kkrqs\" (UID: \"56e58cdb-3ef2-4cbf-a926-70ac47e83f9c\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-kkrqs" Feb 19 09:57:11 crc kubenswrapper[4965]: I0219 09:57:11.909968 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2zvw\" (UniqueName: \"kubernetes.io/projected/56e58cdb-3ef2-4cbf-a926-70ac47e83f9c-kube-api-access-r2zvw\") pod \"nmstate-operator-694c9596b7-kkrqs\" (UID: 
\"56e58cdb-3ef2-4cbf-a926-70ac47e83f9c\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-kkrqs" Feb 19 09:57:12 crc kubenswrapper[4965]: I0219 09:57:12.095601 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-kkrqs" Feb 19 09:57:12 crc kubenswrapper[4965]: I0219 09:57:12.542366 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-kkrqs"] Feb 19 09:57:13 crc kubenswrapper[4965]: I0219 09:57:13.041408 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-kkrqs" event={"ID":"56e58cdb-3ef2-4cbf-a926-70ac47e83f9c","Type":"ContainerStarted","Data":"13b094f3058d7b98c14e8e06eedb0a4b45fafc7830f7ad3ed8e8a08f7e43e350"} Feb 19 09:57:15 crc kubenswrapper[4965]: I0219 09:57:15.058308 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-kkrqs" event={"ID":"56e58cdb-3ef2-4cbf-a926-70ac47e83f9c","Type":"ContainerStarted","Data":"e5792e694767396afd0d72438cdca8edc60170a9816990ce2ce09a9240899d0b"} Feb 19 09:57:15 crc kubenswrapper[4965]: I0219 09:57:15.078646 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-kkrqs" podStartSLOduration=1.970904043 podStartE2EDuration="4.078615337s" podCreationTimestamp="2026-02-19 09:57:11 +0000 UTC" firstStartedPulling="2026-02-19 09:57:12.557588644 +0000 UTC m=+888.178909964" lastFinishedPulling="2026-02-19 09:57:14.665299938 +0000 UTC m=+890.286621258" observedRunningTime="2026-02-19 09:57:15.076865905 +0000 UTC m=+890.698187215" watchObservedRunningTime="2026-02-19 09:57:15.078615337 +0000 UTC m=+890.699936687" Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.006880 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-jnl6n"] Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 
09:57:21.009181 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jnl6n"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.016542 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-9qtzm"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.023343 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxbh5\" (UniqueName: \"kubernetes.io/projected/fbb4bfee-56b1-49ff-ae41-a6ea373fd06a-kube-api-access-rxbh5\") pod \"nmstate-metrics-58c85c668d-jnl6n\" (UID: \"fbb4bfee-56b1-49ff-ae41-a6ea373fd06a\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-jnl6n"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.072542 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-8gm87"]
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.073536 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8gm87"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.076275 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-jnl6n"]
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.076405 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.081002 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-ftvpm"]
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.081739 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-ftvpm"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.097207 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-8gm87"]
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.131808 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbh5\" (UniqueName: \"kubernetes.io/projected/fbb4bfee-56b1-49ff-ae41-a6ea373fd06a-kube-api-access-rxbh5\") pod \"nmstate-metrics-58c85c668d-jnl6n\" (UID: \"fbb4bfee-56b1-49ff-ae41-a6ea373fd06a\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-jnl6n"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.168851 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxbh5\" (UniqueName: \"kubernetes.io/projected/fbb4bfee-56b1-49ff-ae41-a6ea373fd06a-kube-api-access-rxbh5\") pod \"nmstate-metrics-58c85c668d-jnl6n\" (UID: \"fbb4bfee-56b1-49ff-ae41-a6ea373fd06a\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-jnl6n"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.175151 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5xfsg"]
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.178889 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5xfsg"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.182754 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.182859 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-txb6c"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.183050 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.190274 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5xfsg"]
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.234335 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcqnl\" (UniqueName: \"kubernetes.io/projected/75ab303c-d1a1-45fd-b457-b5c2a118e898-kube-api-access-hcqnl\") pod \"nmstate-handler-ftvpm\" (UID: \"75ab303c-d1a1-45fd-b457-b5c2a118e898\") " pod="openshift-nmstate/nmstate-handler-ftvpm"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.234381 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtcpc\" (UniqueName: \"kubernetes.io/projected/85cb536f-7492-4fb3-90dd-d71c7d207771-kube-api-access-wtcpc\") pod \"nmstate-console-plugin-5c78fc5d65-5xfsg\" (UID: \"85cb536f-7492-4fb3-90dd-d71c7d207771\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5xfsg"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.234404 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/75ab303c-d1a1-45fd-b457-b5c2a118e898-ovs-socket\") pod \"nmstate-handler-ftvpm\" (UID: \"75ab303c-d1a1-45fd-b457-b5c2a118e898\") " pod="openshift-nmstate/nmstate-handler-ftvpm"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.234432 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/85cb536f-7492-4fb3-90dd-d71c7d207771-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-5xfsg\" (UID: \"85cb536f-7492-4fb3-90dd-d71c7d207771\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5xfsg"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.234448 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/85cb536f-7492-4fb3-90dd-d71c7d207771-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-5xfsg\" (UID: \"85cb536f-7492-4fb3-90dd-d71c7d207771\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5xfsg"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.234480 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/75ab303c-d1a1-45fd-b457-b5c2a118e898-dbus-socket\") pod \"nmstate-handler-ftvpm\" (UID: \"75ab303c-d1a1-45fd-b457-b5c2a118e898\") " pod="openshift-nmstate/nmstate-handler-ftvpm"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.234503 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vljc6\" (UniqueName: \"kubernetes.io/projected/1b0feb3d-7d0d-43b4-bf7c-afd4e11dc0b7-kube-api-access-vljc6\") pod \"nmstate-webhook-866bcb46dc-8gm87\" (UID: \"1b0feb3d-7d0d-43b4-bf7c-afd4e11dc0b7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8gm87"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.234521 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/75ab303c-d1a1-45fd-b457-b5c2a118e898-nmstate-lock\") pod \"nmstate-handler-ftvpm\" (UID: \"75ab303c-d1a1-45fd-b457-b5c2a118e898\") " pod="openshift-nmstate/nmstate-handler-ftvpm"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.234567 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1b0feb3d-7d0d-43b4-bf7c-afd4e11dc0b7-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-8gm87\" (UID: \"1b0feb3d-7d0d-43b4-bf7c-afd4e11dc0b7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8gm87"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.335568 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcqnl\" (UniqueName: \"kubernetes.io/projected/75ab303c-d1a1-45fd-b457-b5c2a118e898-kube-api-access-hcqnl\") pod \"nmstate-handler-ftvpm\" (UID: \"75ab303c-d1a1-45fd-b457-b5c2a118e898\") " pod="openshift-nmstate/nmstate-handler-ftvpm"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.335616 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtcpc\" (UniqueName: \"kubernetes.io/projected/85cb536f-7492-4fb3-90dd-d71c7d207771-kube-api-access-wtcpc\") pod \"nmstate-console-plugin-5c78fc5d65-5xfsg\" (UID: \"85cb536f-7492-4fb3-90dd-d71c7d207771\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5xfsg"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.335638 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/75ab303c-d1a1-45fd-b457-b5c2a118e898-ovs-socket\") pod \"nmstate-handler-ftvpm\" (UID: \"75ab303c-d1a1-45fd-b457-b5c2a118e898\") " pod="openshift-nmstate/nmstate-handler-ftvpm"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.335663 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/85cb536f-7492-4fb3-90dd-d71c7d207771-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-5xfsg\" (UID: \"85cb536f-7492-4fb3-90dd-d71c7d207771\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5xfsg"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.335681 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/85cb536f-7492-4fb3-90dd-d71c7d207771-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-5xfsg\" (UID: \"85cb536f-7492-4fb3-90dd-d71c7d207771\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5xfsg"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.335709 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/75ab303c-d1a1-45fd-b457-b5c2a118e898-dbus-socket\") pod \"nmstate-handler-ftvpm\" (UID: \"75ab303c-d1a1-45fd-b457-b5c2a118e898\") " pod="openshift-nmstate/nmstate-handler-ftvpm"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.335732 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vljc6\" (UniqueName: \"kubernetes.io/projected/1b0feb3d-7d0d-43b4-bf7c-afd4e11dc0b7-kube-api-access-vljc6\") pod \"nmstate-webhook-866bcb46dc-8gm87\" (UID: \"1b0feb3d-7d0d-43b4-bf7c-afd4e11dc0b7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8gm87"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.335748 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/75ab303c-d1a1-45fd-b457-b5c2a118e898-nmstate-lock\") pod \"nmstate-handler-ftvpm\" (UID: \"75ab303c-d1a1-45fd-b457-b5c2a118e898\") " pod="openshift-nmstate/nmstate-handler-ftvpm"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.335785 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1b0feb3d-7d0d-43b4-bf7c-afd4e11dc0b7-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-8gm87\" (UID: \"1b0feb3d-7d0d-43b4-bf7c-afd4e11dc0b7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8gm87"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.335996 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/75ab303c-d1a1-45fd-b457-b5c2a118e898-ovs-socket\") pod \"nmstate-handler-ftvpm\" (UID: \"75ab303c-d1a1-45fd-b457-b5c2a118e898\") " pod="openshift-nmstate/nmstate-handler-ftvpm"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.336148 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/75ab303c-d1a1-45fd-b457-b5c2a118e898-nmstate-lock\") pod \"nmstate-handler-ftvpm\" (UID: \"75ab303c-d1a1-45fd-b457-b5c2a118e898\") " pod="openshift-nmstate/nmstate-handler-ftvpm"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.336500 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/75ab303c-d1a1-45fd-b457-b5c2a118e898-dbus-socket\") pod \"nmstate-handler-ftvpm\" (UID: \"75ab303c-d1a1-45fd-b457-b5c2a118e898\") " pod="openshift-nmstate/nmstate-handler-ftvpm"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.336999 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/85cb536f-7492-4fb3-90dd-d71c7d207771-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-5xfsg\" (UID: \"85cb536f-7492-4fb3-90dd-d71c7d207771\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5xfsg"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.341474 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1b0feb3d-7d0d-43b4-bf7c-afd4e11dc0b7-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-8gm87\" (UID: \"1b0feb3d-7d0d-43b4-bf7c-afd4e11dc0b7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8gm87"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.355157 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d89f4d66b-h95v2"]
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.356185 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.356978 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/85cb536f-7492-4fb3-90dd-d71c7d207771-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-5xfsg\" (UID: \"85cb536f-7492-4fb3-90dd-d71c7d207771\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5xfsg"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.361519 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcqnl\" (UniqueName: \"kubernetes.io/projected/75ab303c-d1a1-45fd-b457-b5c2a118e898-kube-api-access-hcqnl\") pod \"nmstate-handler-ftvpm\" (UID: \"75ab303c-d1a1-45fd-b457-b5c2a118e898\") " pod="openshift-nmstate/nmstate-handler-ftvpm"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.363998 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtcpc\" (UniqueName: \"kubernetes.io/projected/85cb536f-7492-4fb3-90dd-d71c7d207771-kube-api-access-wtcpc\") pod \"nmstate-console-plugin-5c78fc5d65-5xfsg\" (UID: \"85cb536f-7492-4fb3-90dd-d71c7d207771\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5xfsg"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.364830 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d89f4d66b-h95v2"]
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.369578 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vljc6\" (UniqueName: \"kubernetes.io/projected/1b0feb3d-7d0d-43b4-bf7c-afd4e11dc0b7-kube-api-access-vljc6\") pod \"nmstate-webhook-866bcb46dc-8gm87\" (UID: \"1b0feb3d-7d0d-43b4-bf7c-afd4e11dc0b7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8gm87"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.387681 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jnl6n"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.402607 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8gm87"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.413964 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-ftvpm"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.436650 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5-trusted-ca-bundle\") pod \"console-5d89f4d66b-h95v2\" (UID: \"cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5\") " pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.436696 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5-service-ca\") pod \"console-5d89f4d66b-h95v2\" (UID: \"cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5\") " pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.436729 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5-console-serving-cert\") pod \"console-5d89f4d66b-h95v2\" (UID: \"cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5\") " pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.436769 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5-console-oauth-config\") pod \"console-5d89f4d66b-h95v2\" (UID: \"cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5\") " pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.436799 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5-oauth-serving-cert\") pod \"console-5d89f4d66b-h95v2\" (UID: \"cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5\") " pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.436815 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5-console-config\") pod \"console-5d89f4d66b-h95v2\" (UID: \"cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5\") " pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.436829 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sx5v\" (UniqueName: \"kubernetes.io/projected/cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5-kube-api-access-8sx5v\") pod \"console-5d89f4d66b-h95v2\" (UID: \"cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5\") " pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.503133 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5xfsg"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.537735 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5-oauth-serving-cert\") pod \"console-5d89f4d66b-h95v2\" (UID: \"cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5\") " pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.537784 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5-console-config\") pod \"console-5d89f4d66b-h95v2\" (UID: \"cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5\") " pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.537810 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sx5v\" (UniqueName: \"kubernetes.io/projected/cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5-kube-api-access-8sx5v\") pod \"console-5d89f4d66b-h95v2\" (UID: \"cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5\") " pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.552306 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5-oauth-serving-cert\") pod \"console-5d89f4d66b-h95v2\" (UID: \"cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5\") " pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.552674 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5-trusted-ca-bundle\") pod \"console-5d89f4d66b-h95v2\" (UID: \"cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5\") " pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.552846 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5-console-config\") pod \"console-5d89f4d66b-h95v2\" (UID: \"cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5\") " pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.559343 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5-trusted-ca-bundle\") pod \"console-5d89f4d66b-h95v2\" (UID: \"cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5\") " pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.559475 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5-service-ca\") pod \"console-5d89f4d66b-h95v2\" (UID: \"cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5\") " pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.559523 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5-console-serving-cert\") pod \"console-5d89f4d66b-h95v2\" (UID: \"cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5\") " pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.561813 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5-service-ca\") pod \"console-5d89f4d66b-h95v2\" (UID: \"cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5\") " pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.564403 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5-console-oauth-config\") pod \"console-5d89f4d66b-h95v2\" (UID: \"cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5\") " pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.569708 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5-console-serving-cert\") pod \"console-5d89f4d66b-h95v2\" (UID: \"cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5\") " pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.570292 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5-console-oauth-config\") pod \"console-5d89f4d66b-h95v2\" (UID: \"cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5\") " pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.574889 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sx5v\" (UniqueName: \"kubernetes.io/projected/cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5-kube-api-access-8sx5v\") pod \"console-5d89f4d66b-h95v2\" (UID: \"cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5\") " pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.604235 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-jnl6n"]
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.664601 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-8gm87"]
Feb 19 09:57:21 crc kubenswrapper[4965]: W0219 09:57:21.677426 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b0feb3d_7d0d_43b4_bf7c_afd4e11dc0b7.slice/crio-ad60517463df0c75e30195e92fcb42b58938eeab2219c509667e536d292f1248 WatchSource:0}: Error finding container ad60517463df0c75e30195e92fcb42b58938eeab2219c509667e536d292f1248: Status 404 returned error can't find the container with id ad60517463df0c75e30195e92fcb42b58938eeab2219c509667e536d292f1248
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.734425 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5xfsg"]
Feb 19 09:57:21 crc kubenswrapper[4965]: W0219 09:57:21.742349 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85cb536f_7492_4fb3_90dd_d71c7d207771.slice/crio-1a93a104eca862dd8ab0ae9b4680da3a782c06fe5e3c4d973be73a1434bda521 WatchSource:0}: Error finding container 1a93a104eca862dd8ab0ae9b4680da3a782c06fe5e3c4d973be73a1434bda521: Status 404 returned error can't find the container with id 1a93a104eca862dd8ab0ae9b4680da3a782c06fe5e3c4d973be73a1434bda521
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.748725 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:21 crc kubenswrapper[4965]: I0219 09:57:21.936407 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d89f4d66b-h95v2"]
Feb 19 09:57:21 crc kubenswrapper[4965]: W0219 09:57:21.941581 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb12d567_ad8c_4ffb_bef0_ba27b6a3a6d5.slice/crio-522b8f68cbb8b200202839333cb07a558c1e23af4df3bbe4b52e0110b3c6fb27 WatchSource:0}: Error finding container 522b8f68cbb8b200202839333cb07a558c1e23af4df3bbe4b52e0110b3c6fb27: Status 404 returned error can't find the container with id 522b8f68cbb8b200202839333cb07a558c1e23af4df3bbe4b52e0110b3c6fb27
Feb 19 09:57:22 crc kubenswrapper[4965]: I0219 09:57:22.106366 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ftvpm" event={"ID":"75ab303c-d1a1-45fd-b457-b5c2a118e898","Type":"ContainerStarted","Data":"bf1822798170bc878bdff1effb95dcbff0326c6c22e758a45796a14efa2e6c8f"}
Feb 19 09:57:22 crc kubenswrapper[4965]: I0219 09:57:22.108545 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5xfsg" event={"ID":"85cb536f-7492-4fb3-90dd-d71c7d207771","Type":"ContainerStarted","Data":"1a93a104eca862dd8ab0ae9b4680da3a782c06fe5e3c4d973be73a1434bda521"}
Feb 19 09:57:22 crc kubenswrapper[4965]: I0219 09:57:22.111012 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jnl6n" event={"ID":"fbb4bfee-56b1-49ff-ae41-a6ea373fd06a","Type":"ContainerStarted","Data":"9c8278ee4d3457f33a8d1f1113a2d5e20cac161052c0373bb058fe3330160caa"}
Feb 19 09:57:22 crc kubenswrapper[4965]: I0219 09:57:22.113891 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d89f4d66b-h95v2" event={"ID":"cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5","Type":"ContainerStarted","Data":"795085222ee65db5fd191bf6586e856dbbef66c546b12eccb0df983a86a54f9d"}
Feb 19 09:57:22 crc kubenswrapper[4965]: I0219 09:57:22.113968 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d89f4d66b-h95v2" event={"ID":"cb12d567-ad8c-4ffb-bef0-ba27b6a3a6d5","Type":"ContainerStarted","Data":"522b8f68cbb8b200202839333cb07a558c1e23af4df3bbe4b52e0110b3c6fb27"}
Feb 19 09:57:22 crc kubenswrapper[4965]: I0219 09:57:22.116868 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8gm87" event={"ID":"1b0feb3d-7d0d-43b4-bf7c-afd4e11dc0b7","Type":"ContainerStarted","Data":"ad60517463df0c75e30195e92fcb42b58938eeab2219c509667e536d292f1248"}
Feb 19 09:57:22 crc kubenswrapper[4965]: I0219 09:57:22.137649 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5d89f4d66b-h95v2" podStartSLOduration=1.137606427 podStartE2EDuration="1.137606427s" podCreationTimestamp="2026-02-19 09:57:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:57:22.133661961 +0000 UTC m=+897.754983321" watchObservedRunningTime="2026-02-19 09:57:22.137606427 +0000 UTC m=+897.758927747"
Feb 19 09:57:26 crc kubenswrapper[4965]: I0219 09:57:26.158123 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ftvpm" event={"ID":"75ab303c-d1a1-45fd-b457-b5c2a118e898","Type":"ContainerStarted","Data":"42fb96de2500f1bda78dfc09706fb6b2baa9ed2fbe5541e0ee272c840fa2ed17"}
Feb 19 09:57:26 crc kubenswrapper[4965]: I0219 09:57:26.159022 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-ftvpm"
Feb 19 09:57:26 crc kubenswrapper[4965]: I0219 09:57:26.162896 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5xfsg" event={"ID":"85cb536f-7492-4fb3-90dd-d71c7d207771","Type":"ContainerStarted","Data":"49f7ac5b0309ddf3695464f9a53a7ba1d3ee5cabd353bb3e3ed5558df2ff3604"}
Feb 19 09:57:26 crc kubenswrapper[4965]: I0219 09:57:26.165418 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jnl6n" event={"ID":"fbb4bfee-56b1-49ff-ae41-a6ea373fd06a","Type":"ContainerStarted","Data":"bc2b5ed33359938421f678d1642479580553bdecf247f70985e1145d0b2d59f7"}
Feb 19 09:57:26 crc kubenswrapper[4965]: I0219 09:57:26.167045 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8gm87" event={"ID":"1b0feb3d-7d0d-43b4-bf7c-afd4e11dc0b7","Type":"ContainerStarted","Data":"1057875b5dba6c2a4fd99fe49b12927d4908bb91c263892044f8e1b20e11d2ae"}
Feb 19 09:57:26 crc kubenswrapper[4965]: I0219 09:57:26.167230 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8gm87"
Feb 19 09:57:26 crc kubenswrapper[4965]: I0219 09:57:26.184829 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-ftvpm" podStartSLOduration=1.637119829 podStartE2EDuration="5.18480666s" podCreationTimestamp="2026-02-19 09:57:21 +0000 UTC" firstStartedPulling="2026-02-19 09:57:21.441726454 +0000 UTC m=+897.063047764" lastFinishedPulling="2026-02-19 09:57:24.989413285 +0000 UTC m=+900.610734595" observedRunningTime="2026-02-19 09:57:26.181725235 +0000 UTC m=+901.803046615" watchObservedRunningTime="2026-02-19 09:57:26.18480666 +0000 UTC m=+901.806128010"
Feb 19 09:57:26 crc kubenswrapper[4965]: I0219 09:57:26.210474 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8gm87" podStartSLOduration=1.8826632129999998 podStartE2EDuration="5.210452043s" podCreationTimestamp="2026-02-19 09:57:21 +0000 UTC" firstStartedPulling="2026-02-19 09:57:21.680872143 +0000 UTC m=+897.302193453" lastFinishedPulling="2026-02-19 09:57:25.008660973 +0000 UTC m=+900.629982283" observedRunningTime="2026-02-19 09:57:26.203951215 +0000 UTC m=+901.825272585" watchObservedRunningTime="2026-02-19 09:57:26.210452043 +0000 UTC m=+901.831773373"
Feb 19 09:57:26 crc kubenswrapper[4965]: I0219 09:57:26.239013 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5xfsg" podStartSLOduration=2.010324634 podStartE2EDuration="5.238990206s" podCreationTimestamp="2026-02-19 09:57:21 +0000 UTC" firstStartedPulling="2026-02-19 09:57:21.744017097 +0000 UTC m=+897.365338407" lastFinishedPulling="2026-02-19 09:57:24.972682679 +0000 UTC m=+900.594003979" observedRunningTime="2026-02-19 09:57:26.235002029 +0000 UTC m=+901.856323379" watchObservedRunningTime="2026-02-19 09:57:26.238990206 +0000 UTC m=+901.860311526"
Feb 19 09:57:28 crc kubenswrapper[4965]: I0219 09:57:28.185787 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jnl6n" event={"ID":"fbb4bfee-56b1-49ff-ae41-a6ea373fd06a","Type":"ContainerStarted","Data":"ddacbfe3099388fdd39b68773649428b968a5c630c9b3135119f6e3d1b283726"}
Feb 19 09:57:28 crc kubenswrapper[4965]: I0219 09:57:28.216469 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jnl6n" podStartSLOduration=2.372421691 podStartE2EDuration="8.216452318s" podCreationTimestamp="2026-02-19 09:57:20 +0000 UTC" firstStartedPulling="2026-02-19 09:57:21.619696947 +0000 UTC m=+897.241018247" lastFinishedPulling="2026-02-19 09:57:27.463727554 +0000 UTC m=+903.085048874" observedRunningTime="2026-02-19 09:57:28.211863925 +0000 UTC m=+903.833185245" watchObservedRunningTime="2026-02-19 09:57:28.216452318 +0000 UTC m=+903.837773628"
Feb 19 09:57:31 crc kubenswrapper[4965]: I0219 09:57:31.450848 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-ftvpm"
Feb 19 09:57:31 crc kubenswrapper[4965]: I0219 09:57:31.749437 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:31 crc kubenswrapper[4965]: I0219 09:57:31.749556 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:31 crc kubenswrapper[4965]: I0219 09:57:31.756917 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:32 crc kubenswrapper[4965]: I0219 09:57:32.264353 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d89f4d66b-h95v2"
Feb 19 09:57:32 crc kubenswrapper[4965]: I0219 09:57:32.334437 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-hgzq5"]
Feb 19 09:57:41 crc kubenswrapper[4965]: I0219 09:57:41.412236 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-8gm87"
Feb 19 09:57:46 crc kubenswrapper[4965]: I0219 09:57:46.601151 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 09:57:46 crc kubenswrapper[4965]: I0219 09:57:46.601745 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.391799 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-hgzq5" podUID="91fd349f-c4be-4636-a5a9-76ed721d9afa" containerName="console" containerID="cri-o://0fbbf58178d7bb77de8f645e6b9b5209d0f9eca6ff3a1b19f345fb0cb4e93c29" gracePeriod=15
Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.518207 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6"]
Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.519303 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6"
Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.522011 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.548283 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6"]
Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.561820 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbwkf\" (UniqueName: \"kubernetes.io/projected/38b3ecaf-956c-479a-8c6e-4e0dd083f186-kube-api-access-fbwkf\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6\" (UID: \"38b3ecaf-956c-479a-8c6e-4e0dd083f186\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6"
Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.561881 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38b3ecaf-956c-479a-8c6e-4e0dd083f186-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6\" (UID: \"38b3ecaf-956c-479a-8c6e-4e0dd083f186\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6"
Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.561920 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38b3ecaf-956c-479a-8c6e-4e0dd083f186-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6\" (UID: \"38b3ecaf-956c-479a-8c6e-4e0dd083f186\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6"
Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.663526 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbwkf\" (UniqueName: \"kubernetes.io/projected/38b3ecaf-956c-479a-8c6e-4e0dd083f186-kube-api-access-fbwkf\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6\" (UID: \"38b3ecaf-956c-479a-8c6e-4e0dd083f186\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6"
Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.663609 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38b3ecaf-956c-479a-8c6e-4e0dd083f186-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6\" (UID: \"38b3ecaf-956c-479a-8c6e-4e0dd083f186\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6"
Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.663667 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38b3ecaf-956c-479a-8c6e-4e0dd083f186-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6\" (UID: \"38b3ecaf-956c-479a-8c6e-4e0dd083f186\") "
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6" Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.664797 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38b3ecaf-956c-479a-8c6e-4e0dd083f186-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6\" (UID: \"38b3ecaf-956c-479a-8c6e-4e0dd083f186\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6" Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.664907 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38b3ecaf-956c-479a-8c6e-4e0dd083f186-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6\" (UID: \"38b3ecaf-956c-479a-8c6e-4e0dd083f186\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6" Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.691259 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbwkf\" (UniqueName: \"kubernetes.io/projected/38b3ecaf-956c-479a-8c6e-4e0dd083f186-kube-api-access-fbwkf\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6\" (UID: \"38b3ecaf-956c-479a-8c6e-4e0dd083f186\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6" Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.787145 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-hgzq5_91fd349f-c4be-4636-a5a9-76ed721d9afa/console/0.log" Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.787240 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-hgzq5" Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.834297 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6" Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.866398 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/91fd349f-c4be-4636-a5a9-76ed721d9afa-console-serving-cert\") pod \"91fd349f-c4be-4636-a5a9-76ed721d9afa\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.866509 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-trusted-ca-bundle\") pod \"91fd349f-c4be-4636-a5a9-76ed721d9afa\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.866538 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7b5f\" (UniqueName: \"kubernetes.io/projected/91fd349f-c4be-4636-a5a9-76ed721d9afa-kube-api-access-q7b5f\") pod \"91fd349f-c4be-4636-a5a9-76ed721d9afa\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.866602 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-oauth-serving-cert\") pod \"91fd349f-c4be-4636-a5a9-76ed721d9afa\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.866633 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/91fd349f-c4be-4636-a5a9-76ed721d9afa-console-oauth-config\") pod \"91fd349f-c4be-4636-a5a9-76ed721d9afa\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.866663 
4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-console-config\") pod \"91fd349f-c4be-4636-a5a9-76ed721d9afa\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.866685 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-service-ca\") pod \"91fd349f-c4be-4636-a5a9-76ed721d9afa\" (UID: \"91fd349f-c4be-4636-a5a9-76ed721d9afa\") " Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.867254 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-service-ca" (OuterVolumeSpecName: "service-ca") pod "91fd349f-c4be-4636-a5a9-76ed721d9afa" (UID: "91fd349f-c4be-4636-a5a9-76ed721d9afa"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.867275 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "91fd349f-c4be-4636-a5a9-76ed721d9afa" (UID: "91fd349f-c4be-4636-a5a9-76ed721d9afa"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.867344 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-console-config" (OuterVolumeSpecName: "console-config") pod "91fd349f-c4be-4636-a5a9-76ed721d9afa" (UID: "91fd349f-c4be-4636-a5a9-76ed721d9afa"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.867370 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "91fd349f-c4be-4636-a5a9-76ed721d9afa" (UID: "91fd349f-c4be-4636-a5a9-76ed721d9afa"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.867663 4965 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.867683 4965 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.867697 4965 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.867708 4965 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/91fd349f-c4be-4636-a5a9-76ed721d9afa-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.871062 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91fd349f-c4be-4636-a5a9-76ed721d9afa-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "91fd349f-c4be-4636-a5a9-76ed721d9afa" (UID: "91fd349f-c4be-4636-a5a9-76ed721d9afa"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.871078 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91fd349f-c4be-4636-a5a9-76ed721d9afa-kube-api-access-q7b5f" (OuterVolumeSpecName: "kube-api-access-q7b5f") pod "91fd349f-c4be-4636-a5a9-76ed721d9afa" (UID: "91fd349f-c4be-4636-a5a9-76ed721d9afa"). InnerVolumeSpecName "kube-api-access-q7b5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.872620 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91fd349f-c4be-4636-a5a9-76ed721d9afa-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "91fd349f-c4be-4636-a5a9-76ed721d9afa" (UID: "91fd349f-c4be-4636-a5a9-76ed721d9afa"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.968379 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7b5f\" (UniqueName: \"kubernetes.io/projected/91fd349f-c4be-4636-a5a9-76ed721d9afa-kube-api-access-q7b5f\") on node \"crc\" DevicePath \"\"" Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.968412 4965 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/91fd349f-c4be-4636-a5a9-76ed721d9afa-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 09:57:57 crc kubenswrapper[4965]: I0219 09:57:57.968421 4965 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/91fd349f-c4be-4636-a5a9-76ed721d9afa-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 09:57:58 crc kubenswrapper[4965]: I0219 09:57:58.024256 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6"] Feb 19 09:57:58 crc kubenswrapper[4965]: I0219 09:57:58.467316 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-hgzq5_91fd349f-c4be-4636-a5a9-76ed721d9afa/console/0.log" Feb 19 09:57:58 crc kubenswrapper[4965]: I0219 09:57:58.467739 4965 generic.go:334] "Generic (PLEG): container finished" podID="91fd349f-c4be-4636-a5a9-76ed721d9afa" containerID="0fbbf58178d7bb77de8f645e6b9b5209d0f9eca6ff3a1b19f345fb0cb4e93c29" exitCode=2 Feb 19 09:57:58 crc kubenswrapper[4965]: I0219 09:57:58.467815 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-hgzq5" Feb 19 09:57:58 crc kubenswrapper[4965]: I0219 09:57:58.467832 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hgzq5" event={"ID":"91fd349f-c4be-4636-a5a9-76ed721d9afa","Type":"ContainerDied","Data":"0fbbf58178d7bb77de8f645e6b9b5209d0f9eca6ff3a1b19f345fb0cb4e93c29"} Feb 19 09:57:58 crc kubenswrapper[4965]: I0219 09:57:58.467872 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hgzq5" event={"ID":"91fd349f-c4be-4636-a5a9-76ed721d9afa","Type":"ContainerDied","Data":"a613bbd6017aa896e75425cc8b56aaab8c3cb8f219f72fa25f980d60a1fbe4c6"} Feb 19 09:57:58 crc kubenswrapper[4965]: I0219 09:57:58.467898 4965 scope.go:117] "RemoveContainer" containerID="0fbbf58178d7bb77de8f645e6b9b5209d0f9eca6ff3a1b19f345fb0cb4e93c29" Feb 19 09:57:58 crc kubenswrapper[4965]: I0219 09:57:58.482175 4965 generic.go:334] "Generic (PLEG): container finished" podID="38b3ecaf-956c-479a-8c6e-4e0dd083f186" containerID="da566686da0fecf0758d05b1a98ba9bea8239f5dbe83e2e7a36a298c9bff469f" exitCode=0 Feb 19 09:57:58 crc kubenswrapper[4965]: I0219 09:57:58.482255 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6" event={"ID":"38b3ecaf-956c-479a-8c6e-4e0dd083f186","Type":"ContainerDied","Data":"da566686da0fecf0758d05b1a98ba9bea8239f5dbe83e2e7a36a298c9bff469f"} Feb 19 09:57:58 crc kubenswrapper[4965]: I0219 09:57:58.482312 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6" event={"ID":"38b3ecaf-956c-479a-8c6e-4e0dd083f186","Type":"ContainerStarted","Data":"64add9067f8a8eeea50f5a7fa12fa3b1bfa8c26b6ba91b11b8cc4e9163221e34"} Feb 19 09:57:58 crc kubenswrapper[4965]: I0219 09:57:58.488051 4965 scope.go:117] "RemoveContainer" containerID="0fbbf58178d7bb77de8f645e6b9b5209d0f9eca6ff3a1b19f345fb0cb4e93c29" Feb 19 09:57:58 crc kubenswrapper[4965]: E0219 09:57:58.489062 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fbbf58178d7bb77de8f645e6b9b5209d0f9eca6ff3a1b19f345fb0cb4e93c29\": container with ID starting with 0fbbf58178d7bb77de8f645e6b9b5209d0f9eca6ff3a1b19f345fb0cb4e93c29 not found: ID does not exist" containerID="0fbbf58178d7bb77de8f645e6b9b5209d0f9eca6ff3a1b19f345fb0cb4e93c29" Feb 19 09:57:58 crc kubenswrapper[4965]: I0219 09:57:58.489111 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fbbf58178d7bb77de8f645e6b9b5209d0f9eca6ff3a1b19f345fb0cb4e93c29"} err="failed to get container status \"0fbbf58178d7bb77de8f645e6b9b5209d0f9eca6ff3a1b19f345fb0cb4e93c29\": rpc error: code = NotFound desc = could not find container \"0fbbf58178d7bb77de8f645e6b9b5209d0f9eca6ff3a1b19f345fb0cb4e93c29\": container with ID starting with 0fbbf58178d7bb77de8f645e6b9b5209d0f9eca6ff3a1b19f345fb0cb4e93c29 not found: ID does not exist" Feb 19 09:57:58 crc kubenswrapper[4965]: I0219 09:57:58.528758 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-console/console-f9d7485db-hgzq5"] Feb 19 09:57:58 crc kubenswrapper[4965]: I0219 09:57:58.532964 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-hgzq5"] Feb 19 09:57:58 crc kubenswrapper[4965]: I0219 09:57:58.622813 4965 patch_prober.go:28] interesting pod/console-f9d7485db-hgzq5 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/health\": context deadline exceeded" start-of-body= Feb 19 09:57:58 crc kubenswrapper[4965]: I0219 09:57:58.622889 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-hgzq5" podUID="91fd349f-c4be-4636-a5a9-76ed721d9afa" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": context deadline exceeded" Feb 19 09:57:59 crc kubenswrapper[4965]: I0219 09:57:59.206885 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91fd349f-c4be-4636-a5a9-76ed721d9afa" path="/var/lib/kubelet/pods/91fd349f-c4be-4636-a5a9-76ed721d9afa/volumes" Feb 19 09:58:00 crc kubenswrapper[4965]: I0219 09:58:00.500328 4965 generic.go:334] "Generic (PLEG): container finished" podID="38b3ecaf-956c-479a-8c6e-4e0dd083f186" containerID="5b75e4592f422652e47341b989f5684b6ba15779a916ea8d6b59943b98c21573" exitCode=0 Feb 19 09:58:00 crc kubenswrapper[4965]: I0219 09:58:00.500389 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6" event={"ID":"38b3ecaf-956c-479a-8c6e-4e0dd083f186","Type":"ContainerDied","Data":"5b75e4592f422652e47341b989f5684b6ba15779a916ea8d6b59943b98c21573"} Feb 19 09:58:01 crc kubenswrapper[4965]: I0219 09:58:01.507691 4965 generic.go:334] "Generic (PLEG): container finished" podID="38b3ecaf-956c-479a-8c6e-4e0dd083f186" containerID="62ab354c3b59b58c942272e76f5b8f13563991cfdf16d430d98a544b728f9095" exitCode=0 Feb 19 09:58:01 crc 
kubenswrapper[4965]: I0219 09:58:01.507745 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6" event={"ID":"38b3ecaf-956c-479a-8c6e-4e0dd083f186","Type":"ContainerDied","Data":"62ab354c3b59b58c942272e76f5b8f13563991cfdf16d430d98a544b728f9095"} Feb 19 09:58:02 crc kubenswrapper[4965]: I0219 09:58:02.761918 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6" Feb 19 09:58:02 crc kubenswrapper[4965]: I0219 09:58:02.835083 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38b3ecaf-956c-479a-8c6e-4e0dd083f186-util\") pod \"38b3ecaf-956c-479a-8c6e-4e0dd083f186\" (UID: \"38b3ecaf-956c-479a-8c6e-4e0dd083f186\") " Feb 19 09:58:02 crc kubenswrapper[4965]: I0219 09:58:02.835362 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbwkf\" (UniqueName: \"kubernetes.io/projected/38b3ecaf-956c-479a-8c6e-4e0dd083f186-kube-api-access-fbwkf\") pod \"38b3ecaf-956c-479a-8c6e-4e0dd083f186\" (UID: \"38b3ecaf-956c-479a-8c6e-4e0dd083f186\") " Feb 19 09:58:02 crc kubenswrapper[4965]: I0219 09:58:02.835404 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38b3ecaf-956c-479a-8c6e-4e0dd083f186-bundle\") pod \"38b3ecaf-956c-479a-8c6e-4e0dd083f186\" (UID: \"38b3ecaf-956c-479a-8c6e-4e0dd083f186\") " Feb 19 09:58:02 crc kubenswrapper[4965]: I0219 09:58:02.836353 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38b3ecaf-956c-479a-8c6e-4e0dd083f186-bundle" (OuterVolumeSpecName: "bundle") pod "38b3ecaf-956c-479a-8c6e-4e0dd083f186" (UID: "38b3ecaf-956c-479a-8c6e-4e0dd083f186"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:58:02 crc kubenswrapper[4965]: I0219 09:58:02.844236 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b3ecaf-956c-479a-8c6e-4e0dd083f186-kube-api-access-fbwkf" (OuterVolumeSpecName: "kube-api-access-fbwkf") pod "38b3ecaf-956c-479a-8c6e-4e0dd083f186" (UID: "38b3ecaf-956c-479a-8c6e-4e0dd083f186"). InnerVolumeSpecName "kube-api-access-fbwkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:58:02 crc kubenswrapper[4965]: I0219 09:58:02.849530 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38b3ecaf-956c-479a-8c6e-4e0dd083f186-util" (OuterVolumeSpecName: "util") pod "38b3ecaf-956c-479a-8c6e-4e0dd083f186" (UID: "38b3ecaf-956c-479a-8c6e-4e0dd083f186"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:58:02 crc kubenswrapper[4965]: I0219 09:58:02.937322 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbwkf\" (UniqueName: \"kubernetes.io/projected/38b3ecaf-956c-479a-8c6e-4e0dd083f186-kube-api-access-fbwkf\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:02 crc kubenswrapper[4965]: I0219 09:58:02.937397 4965 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38b3ecaf-956c-479a-8c6e-4e0dd083f186-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:02 crc kubenswrapper[4965]: I0219 09:58:02.937409 4965 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38b3ecaf-956c-479a-8c6e-4e0dd083f186-util\") on node \"crc\" DevicePath \"\"" Feb 19 09:58:03 crc kubenswrapper[4965]: I0219 09:58:03.525120 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6" 
event={"ID":"38b3ecaf-956c-479a-8c6e-4e0dd083f186","Type":"ContainerDied","Data":"64add9067f8a8eeea50f5a7fa12fa3b1bfa8c26b6ba91b11b8cc4e9163221e34"} Feb 19 09:58:03 crc kubenswrapper[4965]: I0219 09:58:03.525227 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64add9067f8a8eeea50f5a7fa12fa3b1bfa8c26b6ba91b11b8cc4e9163221e34" Feb 19 09:58:03 crc kubenswrapper[4965]: I0219 09:58:03.525323 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6" Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.558988 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7df4b8cb75-tnc6t"] Feb 19 09:58:11 crc kubenswrapper[4965]: E0219 09:58:11.559790 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b3ecaf-956c-479a-8c6e-4e0dd083f186" containerName="extract" Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.559805 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b3ecaf-956c-479a-8c6e-4e0dd083f186" containerName="extract" Feb 19 09:58:11 crc kubenswrapper[4965]: E0219 09:58:11.559817 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91fd349f-c4be-4636-a5a9-76ed721d9afa" containerName="console" Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.559825 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="91fd349f-c4be-4636-a5a9-76ed721d9afa" containerName="console" Feb 19 09:58:11 crc kubenswrapper[4965]: E0219 09:58:11.559838 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b3ecaf-956c-479a-8c6e-4e0dd083f186" containerName="pull" Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.559847 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b3ecaf-956c-479a-8c6e-4e0dd083f186" containerName="pull" Feb 19 09:58:11 crc kubenswrapper[4965]: E0219 09:58:11.559864 4965 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b3ecaf-956c-479a-8c6e-4e0dd083f186" containerName="util" Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.559872 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b3ecaf-956c-479a-8c6e-4e0dd083f186" containerName="util" Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.559992 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="91fd349f-c4be-4636-a5a9-76ed721d9afa" containerName="console" Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.560017 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b3ecaf-956c-479a-8c6e-4e0dd083f186" containerName="extract" Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.560552 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7df4b8cb75-tnc6t" Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.567004 4965 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.567225 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.567272 4965 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-mp2bd" Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.568322 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.569538 4965 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.579492 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["metallb-system/metallb-operator-controller-manager-7df4b8cb75-tnc6t"] Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.662379 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/281afb41-32a0-42c3-b25c-e2b5ee969867-apiservice-cert\") pod \"metallb-operator-controller-manager-7df4b8cb75-tnc6t\" (UID: \"281afb41-32a0-42c3-b25c-e2b5ee969867\") " pod="metallb-system/metallb-operator-controller-manager-7df4b8cb75-tnc6t" Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.662475 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/281afb41-32a0-42c3-b25c-e2b5ee969867-webhook-cert\") pod \"metallb-operator-controller-manager-7df4b8cb75-tnc6t\" (UID: \"281afb41-32a0-42c3-b25c-e2b5ee969867\") " pod="metallb-system/metallb-operator-controller-manager-7df4b8cb75-tnc6t" Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.662504 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgwsk\" (UniqueName: \"kubernetes.io/projected/281afb41-32a0-42c3-b25c-e2b5ee969867-kube-api-access-bgwsk\") pod \"metallb-operator-controller-manager-7df4b8cb75-tnc6t\" (UID: \"281afb41-32a0-42c3-b25c-e2b5ee969867\") " pod="metallb-system/metallb-operator-controller-manager-7df4b8cb75-tnc6t" Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.764458 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/281afb41-32a0-42c3-b25c-e2b5ee969867-webhook-cert\") pod \"metallb-operator-controller-manager-7df4b8cb75-tnc6t\" (UID: \"281afb41-32a0-42c3-b25c-e2b5ee969867\") " pod="metallb-system/metallb-operator-controller-manager-7df4b8cb75-tnc6t" Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.764517 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bgwsk\" (UniqueName: \"kubernetes.io/projected/281afb41-32a0-42c3-b25c-e2b5ee969867-kube-api-access-bgwsk\") pod \"metallb-operator-controller-manager-7df4b8cb75-tnc6t\" (UID: \"281afb41-32a0-42c3-b25c-e2b5ee969867\") " pod="metallb-system/metallb-operator-controller-manager-7df4b8cb75-tnc6t"
Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.764627 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/281afb41-32a0-42c3-b25c-e2b5ee969867-apiservice-cert\") pod \"metallb-operator-controller-manager-7df4b8cb75-tnc6t\" (UID: \"281afb41-32a0-42c3-b25c-e2b5ee969867\") " pod="metallb-system/metallb-operator-controller-manager-7df4b8cb75-tnc6t"
Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.773781 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/281afb41-32a0-42c3-b25c-e2b5ee969867-webhook-cert\") pod \"metallb-operator-controller-manager-7df4b8cb75-tnc6t\" (UID: \"281afb41-32a0-42c3-b25c-e2b5ee969867\") " pod="metallb-system/metallb-operator-controller-manager-7df4b8cb75-tnc6t"
Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.773826 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/281afb41-32a0-42c3-b25c-e2b5ee969867-apiservice-cert\") pod \"metallb-operator-controller-manager-7df4b8cb75-tnc6t\" (UID: \"281afb41-32a0-42c3-b25c-e2b5ee969867\") " pod="metallb-system/metallb-operator-controller-manager-7df4b8cb75-tnc6t"
Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.798292 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgwsk\" (UniqueName: \"kubernetes.io/projected/281afb41-32a0-42c3-b25c-e2b5ee969867-kube-api-access-bgwsk\") pod \"metallb-operator-controller-manager-7df4b8cb75-tnc6t\" (UID: \"281afb41-32a0-42c3-b25c-e2b5ee969867\") " pod="metallb-system/metallb-operator-controller-manager-7df4b8cb75-tnc6t"
Feb 19 09:58:11 crc kubenswrapper[4965]: I0219 09:58:11.883183 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7df4b8cb75-tnc6t"
Feb 19 09:58:12 crc kubenswrapper[4965]: I0219 09:58:12.137323 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-fc95c66df-lk6qw"]
Feb 19 09:58:12 crc kubenswrapper[4965]: I0219 09:58:12.143528 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-fc95c66df-lk6qw"
Feb 19 09:58:12 crc kubenswrapper[4965]: I0219 09:58:12.148389 4965 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 19 09:58:12 crc kubenswrapper[4965]: I0219 09:58:12.148876 4965 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Feb 19 09:58:12 crc kubenswrapper[4965]: I0219 09:58:12.149043 4965 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-9c5rh"
Feb 19 09:58:12 crc kubenswrapper[4965]: I0219 09:58:12.159498 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-fc95c66df-lk6qw"]
Feb 19 09:58:12 crc kubenswrapper[4965]: I0219 09:58:12.270821 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6be4b034-d7e8-410b-bbef-e4989108becd-apiservice-cert\") pod \"metallb-operator-webhook-server-fc95c66df-lk6qw\" (UID: \"6be4b034-d7e8-410b-bbef-e4989108becd\") " pod="metallb-system/metallb-operator-webhook-server-fc95c66df-lk6qw"
Feb 19 09:58:12 crc kubenswrapper[4965]: I0219 09:58:12.270889 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47g4z\" (UniqueName: \"kubernetes.io/projected/6be4b034-d7e8-410b-bbef-e4989108becd-kube-api-access-47g4z\") pod \"metallb-operator-webhook-server-fc95c66df-lk6qw\" (UID: \"6be4b034-d7e8-410b-bbef-e4989108becd\") " pod="metallb-system/metallb-operator-webhook-server-fc95c66df-lk6qw"
Feb 19 09:58:12 crc kubenswrapper[4965]: I0219 09:58:12.270945 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6be4b034-d7e8-410b-bbef-e4989108becd-webhook-cert\") pod \"metallb-operator-webhook-server-fc95c66df-lk6qw\" (UID: \"6be4b034-d7e8-410b-bbef-e4989108becd\") " pod="metallb-system/metallb-operator-webhook-server-fc95c66df-lk6qw"
Feb 19 09:58:12 crc kubenswrapper[4965]: I0219 09:58:12.274362 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7df4b8cb75-tnc6t"]
Feb 19 09:58:12 crc kubenswrapper[4965]: W0219 09:58:12.293536 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod281afb41_32a0_42c3_b25c_e2b5ee969867.slice/crio-20111e82e75fa7d04dea4c4da02076986f17f2a43114f055376c0d3e9096aae5 WatchSource:0}: Error finding container 20111e82e75fa7d04dea4c4da02076986f17f2a43114f055376c0d3e9096aae5: Status 404 returned error can't find the container with id 20111e82e75fa7d04dea4c4da02076986f17f2a43114f055376c0d3e9096aae5
Feb 19 09:58:12 crc kubenswrapper[4965]: I0219 09:58:12.372674 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6be4b034-d7e8-410b-bbef-e4989108becd-apiservice-cert\") pod \"metallb-operator-webhook-server-fc95c66df-lk6qw\" (UID: \"6be4b034-d7e8-410b-bbef-e4989108becd\") " pod="metallb-system/metallb-operator-webhook-server-fc95c66df-lk6qw"
Feb 19 09:58:12 crc kubenswrapper[4965]: I0219 09:58:12.372747 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47g4z\" (UniqueName: \"kubernetes.io/projected/6be4b034-d7e8-410b-bbef-e4989108becd-kube-api-access-47g4z\") pod \"metallb-operator-webhook-server-fc95c66df-lk6qw\" (UID: \"6be4b034-d7e8-410b-bbef-e4989108becd\") " pod="metallb-system/metallb-operator-webhook-server-fc95c66df-lk6qw"
Feb 19 09:58:12 crc kubenswrapper[4965]: I0219 09:58:12.372809 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6be4b034-d7e8-410b-bbef-e4989108becd-webhook-cert\") pod \"metallb-operator-webhook-server-fc95c66df-lk6qw\" (UID: \"6be4b034-d7e8-410b-bbef-e4989108becd\") " pod="metallb-system/metallb-operator-webhook-server-fc95c66df-lk6qw"
Feb 19 09:58:12 crc kubenswrapper[4965]: I0219 09:58:12.378849 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6be4b034-d7e8-410b-bbef-e4989108becd-webhook-cert\") pod \"metallb-operator-webhook-server-fc95c66df-lk6qw\" (UID: \"6be4b034-d7e8-410b-bbef-e4989108becd\") " pod="metallb-system/metallb-operator-webhook-server-fc95c66df-lk6qw"
Feb 19 09:58:12 crc kubenswrapper[4965]: I0219 09:58:12.378850 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6be4b034-d7e8-410b-bbef-e4989108becd-apiservice-cert\") pod \"metallb-operator-webhook-server-fc95c66df-lk6qw\" (UID: \"6be4b034-d7e8-410b-bbef-e4989108becd\") " pod="metallb-system/metallb-operator-webhook-server-fc95c66df-lk6qw"
Feb 19 09:58:12 crc kubenswrapper[4965]: I0219 09:58:12.388653 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47g4z\" (UniqueName: \"kubernetes.io/projected/6be4b034-d7e8-410b-bbef-e4989108becd-kube-api-access-47g4z\") pod \"metallb-operator-webhook-server-fc95c66df-lk6qw\" (UID: \"6be4b034-d7e8-410b-bbef-e4989108becd\") " pod="metallb-system/metallb-operator-webhook-server-fc95c66df-lk6qw"
Feb 19 09:58:12 crc kubenswrapper[4965]: I0219 09:58:12.477424 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-fc95c66df-lk6qw"
Feb 19 09:58:12 crc kubenswrapper[4965]: I0219 09:58:12.596596 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7df4b8cb75-tnc6t" event={"ID":"281afb41-32a0-42c3-b25c-e2b5ee969867","Type":"ContainerStarted","Data":"20111e82e75fa7d04dea4c4da02076986f17f2a43114f055376c0d3e9096aae5"}
Feb 19 09:58:12 crc kubenswrapper[4965]: I0219 09:58:12.762843 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-fc95c66df-lk6qw"]
Feb 19 09:58:12 crc kubenswrapper[4965]: W0219 09:58:12.774460 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6be4b034_d7e8_410b_bbef_e4989108becd.slice/crio-f3353791c7085cf7df7efb7435a80c95ab694ab2cfedb875345dd1ce3ba1bf20 WatchSource:0}: Error finding container f3353791c7085cf7df7efb7435a80c95ab694ab2cfedb875345dd1ce3ba1bf20: Status 404 returned error can't find the container with id f3353791c7085cf7df7efb7435a80c95ab694ab2cfedb875345dd1ce3ba1bf20
Feb 19 09:58:13 crc kubenswrapper[4965]: I0219 09:58:13.604559 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-fc95c66df-lk6qw" event={"ID":"6be4b034-d7e8-410b-bbef-e4989108becd","Type":"ContainerStarted","Data":"f3353791c7085cf7df7efb7435a80c95ab694ab2cfedb875345dd1ce3ba1bf20"}
Feb 19 09:58:16 crc kubenswrapper[4965]: I0219 09:58:16.601074 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 09:58:16 crc kubenswrapper[4965]: I0219 09:58:16.601723 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 09:58:16 crc kubenswrapper[4965]: I0219 09:58:16.628673 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7df4b8cb75-tnc6t" event={"ID":"281afb41-32a0-42c3-b25c-e2b5ee969867","Type":"ContainerStarted","Data":"7149bc5ce41162b699b3db0e022b1d19ea5e93d3312c7215ac62f9a301038b4c"}
Feb 19 09:58:16 crc kubenswrapper[4965]: I0219 09:58:16.629616 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7df4b8cb75-tnc6t"
Feb 19 09:58:16 crc kubenswrapper[4965]: I0219 09:58:16.654938 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7df4b8cb75-tnc6t" podStartSLOduration=1.780123021 podStartE2EDuration="5.654916278s" podCreationTimestamp="2026-02-19 09:58:11 +0000 UTC" firstStartedPulling="2026-02-19 09:58:12.299180082 +0000 UTC m=+947.920501392" lastFinishedPulling="2026-02-19 09:58:16.173973329 +0000 UTC m=+951.795294649" observedRunningTime="2026-02-19 09:58:16.654276153 +0000 UTC m=+952.275597473" watchObservedRunningTime="2026-02-19 09:58:16.654916278 +0000 UTC m=+952.276237588"
Feb 19 09:58:18 crc kubenswrapper[4965]: I0219 09:58:18.650596 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-fc95c66df-lk6qw" event={"ID":"6be4b034-d7e8-410b-bbef-e4989108becd","Type":"ContainerStarted","Data":"b1237cbb737fb78629070e2ad45bb453843230c329965ebf68b316efc8d31b40"}
Feb 19 09:58:18 crc kubenswrapper[4965]: I0219 09:58:18.651267 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-fc95c66df-lk6qw"
Feb 19 09:58:18 crc kubenswrapper[4965]: I0219 09:58:18.688723 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-fc95c66df-lk6qw" podStartSLOduration=1.6809984980000001 podStartE2EDuration="6.688685477s" podCreationTimestamp="2026-02-19 09:58:12 +0000 UTC" firstStartedPulling="2026-02-19 09:58:12.777889777 +0000 UTC m=+948.399211087" lastFinishedPulling="2026-02-19 09:58:17.785576756 +0000 UTC m=+953.406898066" observedRunningTime="2026-02-19 09:58:18.678443057 +0000 UTC m=+954.299764377" watchObservedRunningTime="2026-02-19 09:58:18.688685477 +0000 UTC m=+954.310006827"
Feb 19 09:58:32 crc kubenswrapper[4965]: I0219 09:58:32.482267 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-fc95c66df-lk6qw"
Feb 19 09:58:46 crc kubenswrapper[4965]: I0219 09:58:46.601426 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 09:58:46 crc kubenswrapper[4965]: I0219 09:58:46.601840 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 09:58:46 crc kubenswrapper[4965]: I0219 09:58:46.601885 4965 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9"
Feb 19 09:58:46 crc kubenswrapper[4965]: I0219 09:58:46.602423 4965 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2381a024086baeb4b1c2a62ae636f4e796e3ec1a1ca046d7c801db6f42b09ff3"} pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 09:58:46 crc kubenswrapper[4965]: I0219 09:58:46.602474 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" containerID="cri-o://2381a024086baeb4b1c2a62ae636f4e796e3ec1a1ca046d7c801db6f42b09ff3" gracePeriod=600
Feb 19 09:58:46 crc kubenswrapper[4965]: I0219 09:58:46.835152 4965 generic.go:334] "Generic (PLEG): container finished" podID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerID="2381a024086baeb4b1c2a62ae636f4e796e3ec1a1ca046d7c801db6f42b09ff3" exitCode=0
Feb 19 09:58:46 crc kubenswrapper[4965]: I0219 09:58:46.835277 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerDied","Data":"2381a024086baeb4b1c2a62ae636f4e796e3ec1a1ca046d7c801db6f42b09ff3"}
Feb 19 09:58:46 crc kubenswrapper[4965]: I0219 09:58:46.835359 4965 scope.go:117] "RemoveContainer" containerID="b59d24bc3fa01905164aa2b246a7f2c9309e5d002a2ffc3bd7f13562cf306e5b"
Feb 19 09:58:47 crc kubenswrapper[4965]: I0219 09:58:47.849025 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerStarted","Data":"ac6c3a11724d0b4226206f45a1c130a82ce4948594339da20a6fb6307209a67e"}
Feb 19 09:58:51 crc kubenswrapper[4965]: I0219 09:58:51.886370 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7df4b8cb75-tnc6t"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.722271 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-hcb66"]
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.725274 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hcb66"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.728011 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-l7r8n"]
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.729533 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l7r8n"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.730101 4965 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-z6758"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.730512 4965 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.730947 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.731789 4965 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.747416 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-l7r8n"]
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.826997 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-5f69j"]
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.827890 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-5f69j"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.830072 4965 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.830635 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.831187 4965 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.831551 4965 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-q52tx"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.852006 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-62qgz"]
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.853171 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-62qgz"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.854163 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/684dceb2-01ab-4856-b857-0d6ade07aadd-reloader\") pod \"frr-k8s-hcb66\" (UID: \"684dceb2-01ab-4856-b857-0d6ade07aadd\") " pod="metallb-system/frr-k8s-hcb66"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.854216 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/684dceb2-01ab-4856-b857-0d6ade07aadd-frr-startup\") pod \"frr-k8s-hcb66\" (UID: \"684dceb2-01ab-4856-b857-0d6ade07aadd\") " pod="metallb-system/frr-k8s-hcb66"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.854247 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/684dceb2-01ab-4856-b857-0d6ade07aadd-metrics\") pod \"frr-k8s-hcb66\" (UID: \"684dceb2-01ab-4856-b857-0d6ade07aadd\") " pod="metallb-system/frr-k8s-hcb66"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.854299 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/de0e351f-d402-4a5b-8942-d22a20ad2fa4-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-l7r8n\" (UID: \"de0e351f-d402-4a5b-8942-d22a20ad2fa4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l7r8n"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.854318 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/684dceb2-01ab-4856-b857-0d6ade07aadd-frr-conf\") pod \"frr-k8s-hcb66\" (UID: \"684dceb2-01ab-4856-b857-0d6ade07aadd\") " pod="metallb-system/frr-k8s-hcb66"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.854345 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br65w\" (UniqueName: \"kubernetes.io/projected/684dceb2-01ab-4856-b857-0d6ade07aadd-kube-api-access-br65w\") pod \"frr-k8s-hcb66\" (UID: \"684dceb2-01ab-4856-b857-0d6ade07aadd\") " pod="metallb-system/frr-k8s-hcb66"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.854379 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwlq5\" (UniqueName: \"kubernetes.io/projected/de0e351f-d402-4a5b-8942-d22a20ad2fa4-kube-api-access-qwlq5\") pod \"frr-k8s-webhook-server-78b44bf5bb-l7r8n\" (UID: \"de0e351f-d402-4a5b-8942-d22a20ad2fa4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l7r8n"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.854396 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/684dceb2-01ab-4856-b857-0d6ade07aadd-frr-sockets\") pod \"frr-k8s-hcb66\" (UID: \"684dceb2-01ab-4856-b857-0d6ade07aadd\") " pod="metallb-system/frr-k8s-hcb66"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.854417 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/684dceb2-01ab-4856-b857-0d6ade07aadd-metrics-certs\") pod \"frr-k8s-hcb66\" (UID: \"684dceb2-01ab-4856-b857-0d6ade07aadd\") " pod="metallb-system/frr-k8s-hcb66"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.858777 4965 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.872065 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-62qgz"]
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.955748 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/684dceb2-01ab-4856-b857-0d6ade07aadd-metrics-certs\") pod \"frr-k8s-hcb66\" (UID: \"684dceb2-01ab-4856-b857-0d6ade07aadd\") " pod="metallb-system/frr-k8s-hcb66"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.955794 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fb263d7-f864-47e8-ba07-5a8860db5d11-cert\") pod \"controller-69bbfbf88f-62qgz\" (UID: \"4fb263d7-f864-47e8-ba07-5a8860db5d11\") " pod="metallb-system/controller-69bbfbf88f-62qgz"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.955822 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b86wl\" (UniqueName: \"kubernetes.io/projected/2a12c073-8d46-4579-a422-6344a8a4959f-kube-api-access-b86wl\") pod \"speaker-5f69j\" (UID: \"2a12c073-8d46-4579-a422-6344a8a4959f\") " pod="metallb-system/speaker-5f69j"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.955848 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a12c073-8d46-4579-a422-6344a8a4959f-metrics-certs\") pod \"speaker-5f69j\" (UID: \"2a12c073-8d46-4579-a422-6344a8a4959f\") " pod="metallb-system/speaker-5f69j"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.955867 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/684dceb2-01ab-4856-b857-0d6ade07aadd-reloader\") pod \"frr-k8s-hcb66\" (UID: \"684dceb2-01ab-4856-b857-0d6ade07aadd\") " pod="metallb-system/frr-k8s-hcb66"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.955891 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/684dceb2-01ab-4856-b857-0d6ade07aadd-frr-startup\") pod \"frr-k8s-hcb66\" (UID: \"684dceb2-01ab-4856-b857-0d6ade07aadd\") " pod="metallb-system/frr-k8s-hcb66"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.955916 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhd8n\" (UniqueName: \"kubernetes.io/projected/4fb263d7-f864-47e8-ba07-5a8860db5d11-kube-api-access-fhd8n\") pod \"controller-69bbfbf88f-62qgz\" (UID: \"4fb263d7-f864-47e8-ba07-5a8860db5d11\") " pod="metallb-system/controller-69bbfbf88f-62qgz"
Feb 19 09:58:52 crc kubenswrapper[4965]: E0219 09:58:52.955968 4965 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.956097 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/684dceb2-01ab-4856-b857-0d6ade07aadd-metrics\") pod \"frr-k8s-hcb66\" (UID: \"684dceb2-01ab-4856-b857-0d6ade07aadd\") " pod="metallb-system/frr-k8s-hcb66"
Feb 19 09:58:52 crc kubenswrapper[4965]: E0219 09:58:52.956108 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/684dceb2-01ab-4856-b857-0d6ade07aadd-metrics-certs podName:684dceb2-01ab-4856-b857-0d6ade07aadd nodeName:}" failed. No retries permitted until 2026-02-19 09:58:53.456083535 +0000 UTC m=+989.077404935 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/684dceb2-01ab-4856-b857-0d6ade07aadd-metrics-certs") pod "frr-k8s-hcb66" (UID: "684dceb2-01ab-4856-b857-0d6ade07aadd") : secret "frr-k8s-certs-secret" not found
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.956158 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2a12c073-8d46-4579-a422-6344a8a4959f-memberlist\") pod \"speaker-5f69j\" (UID: \"2a12c073-8d46-4579-a422-6344a8a4959f\") " pod="metallb-system/speaker-5f69j"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.956217 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/de0e351f-d402-4a5b-8942-d22a20ad2fa4-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-l7r8n\" (UID: \"de0e351f-d402-4a5b-8942-d22a20ad2fa4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l7r8n"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.956242 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/684dceb2-01ab-4856-b857-0d6ade07aadd-frr-conf\") pod \"frr-k8s-hcb66\" (UID: \"684dceb2-01ab-4856-b857-0d6ade07aadd\") " pod="metallb-system/frr-k8s-hcb66"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.956295 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/684dceb2-01ab-4856-b857-0d6ade07aadd-reloader\") pod \"frr-k8s-hcb66\" (UID: \"684dceb2-01ab-4856-b857-0d6ade07aadd\") " pod="metallb-system/frr-k8s-hcb66"
Feb 19 09:58:52 crc kubenswrapper[4965]: E0219 09:58:52.956332 4965 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.956302 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br65w\" (UniqueName: \"kubernetes.io/projected/684dceb2-01ab-4856-b857-0d6ade07aadd-kube-api-access-br65w\") pod \"frr-k8s-hcb66\" (UID: \"684dceb2-01ab-4856-b857-0d6ade07aadd\") " pod="metallb-system/frr-k8s-hcb66"
Feb 19 09:58:52 crc kubenswrapper[4965]: E0219 09:58:52.956397 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de0e351f-d402-4a5b-8942-d22a20ad2fa4-cert podName:de0e351f-d402-4a5b-8942-d22a20ad2fa4 nodeName:}" failed. No retries permitted until 2026-02-19 09:58:53.456383382 +0000 UTC m=+989.077704792 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/de0e351f-d402-4a5b-8942-d22a20ad2fa4-cert") pod "frr-k8s-webhook-server-78b44bf5bb-l7r8n" (UID: "de0e351f-d402-4a5b-8942-d22a20ad2fa4") : secret "frr-k8s-webhook-server-cert" not found
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.956434 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/684dceb2-01ab-4856-b857-0d6ade07aadd-metrics\") pod \"frr-k8s-hcb66\" (UID: \"684dceb2-01ab-4856-b857-0d6ade07aadd\") " pod="metallb-system/frr-k8s-hcb66"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.956465 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2a12c073-8d46-4579-a422-6344a8a4959f-metallb-excludel2\") pod \"speaker-5f69j\" (UID: \"2a12c073-8d46-4579-a422-6344a8a4959f\") " pod="metallb-system/speaker-5f69j"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.956515 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fb263d7-f864-47e8-ba07-5a8860db5d11-metrics-certs\") pod \"controller-69bbfbf88f-62qgz\" (UID: \"4fb263d7-f864-47e8-ba07-5a8860db5d11\") " pod="metallb-system/controller-69bbfbf88f-62qgz"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.956581 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwlq5\" (UniqueName: \"kubernetes.io/projected/de0e351f-d402-4a5b-8942-d22a20ad2fa4-kube-api-access-qwlq5\") pod \"frr-k8s-webhook-server-78b44bf5bb-l7r8n\" (UID: \"de0e351f-d402-4a5b-8942-d22a20ad2fa4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l7r8n"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.956607 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/684dceb2-01ab-4856-b857-0d6ade07aadd-frr-sockets\") pod \"frr-k8s-hcb66\" (UID: \"684dceb2-01ab-4856-b857-0d6ade07aadd\") " pod="metallb-system/frr-k8s-hcb66"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.956679 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/684dceb2-01ab-4856-b857-0d6ade07aadd-frr-conf\") pod \"frr-k8s-hcb66\" (UID: \"684dceb2-01ab-4856-b857-0d6ade07aadd\") " pod="metallb-system/frr-k8s-hcb66"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.957031 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/684dceb2-01ab-4856-b857-0d6ade07aadd-frr-sockets\") pod \"frr-k8s-hcb66\" (UID: \"684dceb2-01ab-4856-b857-0d6ade07aadd\") " pod="metallb-system/frr-k8s-hcb66"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.957411 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/684dceb2-01ab-4856-b857-0d6ade07aadd-frr-startup\") pod \"frr-k8s-hcb66\" (UID: \"684dceb2-01ab-4856-b857-0d6ade07aadd\") " pod="metallb-system/frr-k8s-hcb66"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.978648 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwlq5\" (UniqueName: \"kubernetes.io/projected/de0e351f-d402-4a5b-8942-d22a20ad2fa4-kube-api-access-qwlq5\") pod \"frr-k8s-webhook-server-78b44bf5bb-l7r8n\" (UID: \"de0e351f-d402-4a5b-8942-d22a20ad2fa4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l7r8n"
Feb 19 09:58:52 crc kubenswrapper[4965]: I0219 09:58:52.990805 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br65w\" (UniqueName: \"kubernetes.io/projected/684dceb2-01ab-4856-b857-0d6ade07aadd-kube-api-access-br65w\") pod \"frr-k8s-hcb66\" (UID: \"684dceb2-01ab-4856-b857-0d6ade07aadd\") " pod="metallb-system/frr-k8s-hcb66"
Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.057618 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2a12c073-8d46-4579-a422-6344a8a4959f-metallb-excludel2\") pod \"speaker-5f69j\" (UID: \"2a12c073-8d46-4579-a422-6344a8a4959f\") " pod="metallb-system/speaker-5f69j"
Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.057702 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fb263d7-f864-47e8-ba07-5a8860db5d11-metrics-certs\") pod \"controller-69bbfbf88f-62qgz\" (UID: \"4fb263d7-f864-47e8-ba07-5a8860db5d11\") " pod="metallb-system/controller-69bbfbf88f-62qgz"
Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.057786 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fb263d7-f864-47e8-ba07-5a8860db5d11-cert\") pod \"controller-69bbfbf88f-62qgz\" (UID: \"4fb263d7-f864-47e8-ba07-5a8860db5d11\") " pod="metallb-system/controller-69bbfbf88f-62qgz"
Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.057833 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b86wl\" (UniqueName: \"kubernetes.io/projected/2a12c073-8d46-4579-a422-6344a8a4959f-kube-api-access-b86wl\") pod \"speaker-5f69j\" (UID: \"2a12c073-8d46-4579-a422-6344a8a4959f\") " pod="metallb-system/speaker-5f69j"
Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.057860 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a12c073-8d46-4579-a422-6344a8a4959f-metrics-certs\") pod \"speaker-5f69j\" (UID: \"2a12c073-8d46-4579-a422-6344a8a4959f\") " pod="metallb-system/speaker-5f69j"
Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.057922 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhd8n\" (UniqueName: \"kubernetes.io/projected/4fb263d7-f864-47e8-ba07-5a8860db5d11-kube-api-access-fhd8n\") pod \"controller-69bbfbf88f-62qgz\" (UID: \"4fb263d7-f864-47e8-ba07-5a8860db5d11\") " pod="metallb-system/controller-69bbfbf88f-62qgz"
Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.057949 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2a12c073-8d46-4579-a422-6344a8a4959f-memberlist\") pod \"speaker-5f69j\" (UID: \"2a12c073-8d46-4579-a422-6344a8a4959f\") " pod="metallb-system/speaker-5f69j"
Feb 19 09:58:53 crc kubenswrapper[4965]: E0219 09:58:53.058168 4965 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 19 09:58:53 crc kubenswrapper[4965]: E0219 09:58:53.058257 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a12c073-8d46-4579-a422-6344a8a4959f-memberlist podName:2a12c073-8d46-4579-a422-6344a8a4959f nodeName:}" failed. No retries permitted until 2026-02-19 09:58:53.558236366 +0000 UTC m=+989.179557676 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2a12c073-8d46-4579-a422-6344a8a4959f-memberlist") pod "speaker-5f69j" (UID: "2a12c073-8d46-4579-a422-6344a8a4959f") : secret "metallb-memberlist" not found
Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.058762 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2a12c073-8d46-4579-a422-6344a8a4959f-metallb-excludel2\") pod \"speaker-5f69j\" (UID: \"2a12c073-8d46-4579-a422-6344a8a4959f\") " pod="metallb-system/speaker-5f69j"
Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.062126 4965 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.062389 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a12c073-8d46-4579-a422-6344a8a4959f-metrics-certs\") pod \"speaker-5f69j\" (UID: \"2a12c073-8d46-4579-a422-6344a8a4959f\") " pod="metallb-system/speaker-5f69j"
Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.062924 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fb263d7-f864-47e8-ba07-5a8860db5d11-metrics-certs\") pod \"controller-69bbfbf88f-62qgz\" (UID: \"4fb263d7-f864-47e8-ba07-5a8860db5d11\") " pod="metallb-system/controller-69bbfbf88f-62qgz"
Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.071616 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fb263d7-f864-47e8-ba07-5a8860db5d11-cert\") pod \"controller-69bbfbf88f-62qgz\" (UID: \"4fb263d7-f864-47e8-ba07-5a8860db5d11\") " pod="metallb-system/controller-69bbfbf88f-62qgz"
Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.075489 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-fhd8n\" (UniqueName: \"kubernetes.io/projected/4fb263d7-f864-47e8-ba07-5a8860db5d11-kube-api-access-fhd8n\") pod \"controller-69bbfbf88f-62qgz\" (UID: \"4fb263d7-f864-47e8-ba07-5a8860db5d11\") " pod="metallb-system/controller-69bbfbf88f-62qgz" Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.078158 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b86wl\" (UniqueName: \"kubernetes.io/projected/2a12c073-8d46-4579-a422-6344a8a4959f-kube-api-access-b86wl\") pod \"speaker-5f69j\" (UID: \"2a12c073-8d46-4579-a422-6344a8a4959f\") " pod="metallb-system/speaker-5f69j" Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.167915 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-62qgz" Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.465697 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/de0e351f-d402-4a5b-8942-d22a20ad2fa4-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-l7r8n\" (UID: \"de0e351f-d402-4a5b-8942-d22a20ad2fa4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l7r8n" Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.465762 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/684dceb2-01ab-4856-b857-0d6ade07aadd-metrics-certs\") pod \"frr-k8s-hcb66\" (UID: \"684dceb2-01ab-4856-b857-0d6ade07aadd\") " pod="metallb-system/frr-k8s-hcb66" Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.469128 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/684dceb2-01ab-4856-b857-0d6ade07aadd-metrics-certs\") pod \"frr-k8s-hcb66\" (UID: \"684dceb2-01ab-4856-b857-0d6ade07aadd\") " pod="metallb-system/frr-k8s-hcb66" Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.469776 4965 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/de0e351f-d402-4a5b-8942-d22a20ad2fa4-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-l7r8n\" (UID: \"de0e351f-d402-4a5b-8942-d22a20ad2fa4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l7r8n" Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.567331 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2a12c073-8d46-4579-a422-6344a8a4959f-memberlist\") pod \"speaker-5f69j\" (UID: \"2a12c073-8d46-4579-a422-6344a8a4959f\") " pod="metallb-system/speaker-5f69j" Feb 19 09:58:53 crc kubenswrapper[4965]: E0219 09:58:53.567563 4965 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 09:58:53 crc kubenswrapper[4965]: E0219 09:58:53.567817 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a12c073-8d46-4579-a422-6344a8a4959f-memberlist podName:2a12c073-8d46-4579-a422-6344a8a4959f nodeName:}" failed. No retries permitted until 2026-02-19 09:58:54.56779913 +0000 UTC m=+990.189120440 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2a12c073-8d46-4579-a422-6344a8a4959f-memberlist") pod "speaker-5f69j" (UID: "2a12c073-8d46-4579-a422-6344a8a4959f") : secret "metallb-memberlist" not found Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.654161 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hcb66" Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.662534 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l7r8n" Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.693895 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-62qgz"] Feb 19 09:58:53 crc kubenswrapper[4965]: W0219 09:58:53.709993 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fb263d7_f864_47e8_ba07_5a8860db5d11.slice/crio-8b77b50597c15dba565afcb600b625c1d82460951ee1cd679d8413d729ea8601 WatchSource:0}: Error finding container 8b77b50597c15dba565afcb600b625c1d82460951ee1cd679d8413d729ea8601: Status 404 returned error can't find the container with id 8b77b50597c15dba565afcb600b625c1d82460951ee1cd679d8413d729ea8601 Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.891641 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-62qgz" event={"ID":"4fb263d7-f864-47e8-ba07-5a8860db5d11","Type":"ContainerStarted","Data":"8b77b50597c15dba565afcb600b625c1d82460951ee1cd679d8413d729ea8601"} Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.892978 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hcb66" event={"ID":"684dceb2-01ab-4856-b857-0d6ade07aadd","Type":"ContainerStarted","Data":"65d7087bddf084a93f15c283fc7b57295a86f61f2f9a2b70252e9621668cdee7"} Feb 19 09:58:53 crc kubenswrapper[4965]: I0219 09:58:53.943792 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-l7r8n"] Feb 19 09:58:53 crc kubenswrapper[4965]: W0219 09:58:53.955157 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde0e351f_d402_4a5b_8942_d22a20ad2fa4.slice/crio-a09d7a5081e72c9d888699dc679f822cc0250841c504570166f55e059b0ef175 WatchSource:0}: Error finding container 
a09d7a5081e72c9d888699dc679f822cc0250841c504570166f55e059b0ef175: Status 404 returned error can't find the container with id a09d7a5081e72c9d888699dc679f822cc0250841c504570166f55e059b0ef175 Feb 19 09:58:54 crc kubenswrapper[4965]: I0219 09:58:54.586932 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2a12c073-8d46-4579-a422-6344a8a4959f-memberlist\") pod \"speaker-5f69j\" (UID: \"2a12c073-8d46-4579-a422-6344a8a4959f\") " pod="metallb-system/speaker-5f69j" Feb 19 09:58:54 crc kubenswrapper[4965]: I0219 09:58:54.601081 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2a12c073-8d46-4579-a422-6344a8a4959f-memberlist\") pod \"speaker-5f69j\" (UID: \"2a12c073-8d46-4579-a422-6344a8a4959f\") " pod="metallb-system/speaker-5f69j" Feb 19 09:58:54 crc kubenswrapper[4965]: I0219 09:58:54.642706 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-5f69j" Feb 19 09:58:54 crc kubenswrapper[4965]: I0219 09:58:54.908991 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l7r8n" event={"ID":"de0e351f-d402-4a5b-8942-d22a20ad2fa4","Type":"ContainerStarted","Data":"a09d7a5081e72c9d888699dc679f822cc0250841c504570166f55e059b0ef175"} Feb 19 09:58:54 crc kubenswrapper[4965]: I0219 09:58:54.911616 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-62qgz" event={"ID":"4fb263d7-f864-47e8-ba07-5a8860db5d11","Type":"ContainerStarted","Data":"d8959c9330258a801ad75cf97f050407b8f4e01513d7e8a7888bbae5a48822aa"} Feb 19 09:58:54 crc kubenswrapper[4965]: I0219 09:58:54.911647 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-62qgz" event={"ID":"4fb263d7-f864-47e8-ba07-5a8860db5d11","Type":"ContainerStarted","Data":"e60636a6fd613cda4764111cb764060549672620646166a25c446600673299a4"} Feb 19 09:58:54 crc kubenswrapper[4965]: I0219 09:58:54.912130 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-62qgz" Feb 19 09:58:54 crc kubenswrapper[4965]: I0219 09:58:54.914154 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5f69j" event={"ID":"2a12c073-8d46-4579-a422-6344a8a4959f","Type":"ContainerStarted","Data":"c14178bb3191fdeae0bdc8dfae66c4da766cf7451f1147761464cbf8afd5d8d2"} Feb 19 09:58:54 crc kubenswrapper[4965]: I0219 09:58:54.948058 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-62qgz" podStartSLOduration=2.948019888 podStartE2EDuration="2.948019888s" podCreationTimestamp="2026-02-19 09:58:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:58:54.933921985 +0000 UTC 
m=+990.555243305" watchObservedRunningTime="2026-02-19 09:58:54.948019888 +0000 UTC m=+990.569341238" Feb 19 09:58:55 crc kubenswrapper[4965]: I0219 09:58:55.925034 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5f69j" event={"ID":"2a12c073-8d46-4579-a422-6344a8a4959f","Type":"ContainerStarted","Data":"1b76acd2e4bdd06ad7e28710761756505bc24e92d2c97edb25c53f4d134301aa"} Feb 19 09:58:55 crc kubenswrapper[4965]: I0219 09:58:55.925466 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5f69j" event={"ID":"2a12c073-8d46-4579-a422-6344a8a4959f","Type":"ContainerStarted","Data":"0cea63cd4dd845e86558c23c8683d047a1be8161b2084ada1ae757021ddcadc0"} Feb 19 09:58:55 crc kubenswrapper[4965]: I0219 09:58:55.925482 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-5f69j" Feb 19 09:58:55 crc kubenswrapper[4965]: I0219 09:58:55.946309 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-5f69j" podStartSLOduration=3.9462895 podStartE2EDuration="3.9462895s" podCreationTimestamp="2026-02-19 09:58:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 09:58:55.941521013 +0000 UTC m=+991.562842333" watchObservedRunningTime="2026-02-19 09:58:55.9462895 +0000 UTC m=+991.567610810" Feb 19 09:59:02 crc kubenswrapper[4965]: I0219 09:59:02.009384 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l7r8n" event={"ID":"de0e351f-d402-4a5b-8942-d22a20ad2fa4","Type":"ContainerStarted","Data":"6033cb055a541119cca2e9021fd074de17d3fd7beef55b325cf0c51c0bb0a425"} Feb 19 09:59:02 crc kubenswrapper[4965]: I0219 09:59:02.010633 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l7r8n" Feb 19 09:59:02 crc kubenswrapper[4965]: 
I0219 09:59:02.011411 4965 generic.go:334] "Generic (PLEG): container finished" podID="684dceb2-01ab-4856-b857-0d6ade07aadd" containerID="a5a2f04fca40e10218e5343a4bbab723158dc738bb6e3a61ec2bbb194f6de8bf" exitCode=0 Feb 19 09:59:02 crc kubenswrapper[4965]: I0219 09:59:02.011461 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hcb66" event={"ID":"684dceb2-01ab-4856-b857-0d6ade07aadd","Type":"ContainerDied","Data":"a5a2f04fca40e10218e5343a4bbab723158dc738bb6e3a61ec2bbb194f6de8bf"} Feb 19 09:59:02 crc kubenswrapper[4965]: I0219 09:59:02.029451 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l7r8n" podStartSLOduration=2.420816075 podStartE2EDuration="10.029433794s" podCreationTimestamp="2026-02-19 09:58:52 +0000 UTC" firstStartedPulling="2026-02-19 09:58:53.959859941 +0000 UTC m=+989.581181251" lastFinishedPulling="2026-02-19 09:59:01.56847762 +0000 UTC m=+997.189798970" observedRunningTime="2026-02-19 09:59:02.025777816 +0000 UTC m=+997.647099136" watchObservedRunningTime="2026-02-19 09:59:02.029433794 +0000 UTC m=+997.650755104" Feb 19 09:59:03 crc kubenswrapper[4965]: I0219 09:59:03.021626 4965 generic.go:334] "Generic (PLEG): container finished" podID="684dceb2-01ab-4856-b857-0d6ade07aadd" containerID="af20f0ca76c05804d97f4736f0f6b041c88a89879e7865c614971dac2747df3d" exitCode=0 Feb 19 09:59:03 crc kubenswrapper[4965]: I0219 09:59:03.021748 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hcb66" event={"ID":"684dceb2-01ab-4856-b857-0d6ade07aadd","Type":"ContainerDied","Data":"af20f0ca76c05804d97f4736f0f6b041c88a89879e7865c614971dac2747df3d"} Feb 19 09:59:03 crc kubenswrapper[4965]: I0219 09:59:03.172258 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-62qgz" Feb 19 09:59:04 crc kubenswrapper[4965]: I0219 09:59:04.035090 4965 generic.go:334] "Generic (PLEG): 
container finished" podID="684dceb2-01ab-4856-b857-0d6ade07aadd" containerID="c643b829f5b3056e110538090787f25f3f8bb43a4aa4257ac39493a5b46828bb" exitCode=0 Feb 19 09:59:04 crc kubenswrapper[4965]: I0219 09:59:04.035160 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hcb66" event={"ID":"684dceb2-01ab-4856-b857-0d6ade07aadd","Type":"ContainerDied","Data":"c643b829f5b3056e110538090787f25f3f8bb43a4aa4257ac39493a5b46828bb"} Feb 19 09:59:04 crc kubenswrapper[4965]: I0219 09:59:04.646767 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-5f69j" Feb 19 09:59:05 crc kubenswrapper[4965]: I0219 09:59:05.048477 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hcb66" event={"ID":"684dceb2-01ab-4856-b857-0d6ade07aadd","Type":"ContainerStarted","Data":"3abd44216b74d3a14d5f52043fa93075ee3d7abaa0208db4fdb5974dba3a1219"} Feb 19 09:59:05 crc kubenswrapper[4965]: I0219 09:59:05.048529 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hcb66" event={"ID":"684dceb2-01ab-4856-b857-0d6ade07aadd","Type":"ContainerStarted","Data":"285cc2744fb2c333dee13e25ee6ad58ad7f1d465bea8e6a00eeb00f06031efe8"} Feb 19 09:59:05 crc kubenswrapper[4965]: I0219 09:59:05.048547 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hcb66" event={"ID":"684dceb2-01ab-4856-b857-0d6ade07aadd","Type":"ContainerStarted","Data":"f97f7ac5c1324f8d922293e4e55fc2fb873426f20e7c0203f9114bd5a2e275d3"} Feb 19 09:59:05 crc kubenswrapper[4965]: I0219 09:59:05.048562 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hcb66" event={"ID":"684dceb2-01ab-4856-b857-0d6ade07aadd","Type":"ContainerStarted","Data":"d406283fc3d17363d95e0e52b94cecc690ebfbb5cb555b5b93735c1f2f1e2949"} Feb 19 09:59:05 crc kubenswrapper[4965]: I0219 09:59:05.048578 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-hcb66" event={"ID":"684dceb2-01ab-4856-b857-0d6ade07aadd","Type":"ContainerStarted","Data":"badfc87a9cf5ca48c49868000a32709e2f174b1b454a5c3b4b5b0c833f8e4c73"} Feb 19 09:59:06 crc kubenswrapper[4965]: I0219 09:59:06.062372 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hcb66" event={"ID":"684dceb2-01ab-4856-b857-0d6ade07aadd","Type":"ContainerStarted","Data":"7a66fe997d36bfed93c4e6c30e91c14b9fa83cfb69cadef4100f1ac978410839"} Feb 19 09:59:06 crc kubenswrapper[4965]: I0219 09:59:06.062684 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-hcb66" Feb 19 09:59:06 crc kubenswrapper[4965]: I0219 09:59:06.095442 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-hcb66" podStartSLOduration=6.401298417 podStartE2EDuration="14.095419333s" podCreationTimestamp="2026-02-19 09:58:52 +0000 UTC" firstStartedPulling="2026-02-19 09:58:53.844164321 +0000 UTC m=+989.465485631" lastFinishedPulling="2026-02-19 09:59:01.538285227 +0000 UTC m=+997.159606547" observedRunningTime="2026-02-19 09:59:06.088170948 +0000 UTC m=+1001.709492298" watchObservedRunningTime="2026-02-19 09:59:06.095419333 +0000 UTC m=+1001.716740673" Feb 19 09:59:07 crc kubenswrapper[4965]: I0219 09:59:07.715526 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-gbfbh"] Feb 19 09:59:07 crc kubenswrapper[4965]: I0219 09:59:07.716661 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gbfbh" Feb 19 09:59:07 crc kubenswrapper[4965]: I0219 09:59:07.719969 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-9cfzm" Feb 19 09:59:07 crc kubenswrapper[4965]: I0219 09:59:07.720252 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 19 09:59:07 crc kubenswrapper[4965]: I0219 09:59:07.720466 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 19 09:59:07 crc kubenswrapper[4965]: I0219 09:59:07.743657 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gbfbh"] Feb 19 09:59:07 crc kubenswrapper[4965]: I0219 09:59:07.798524 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgdqt\" (UniqueName: \"kubernetes.io/projected/9036b58f-2984-45ba-98d0-76ff599cda43-kube-api-access-dgdqt\") pod \"openstack-operator-index-gbfbh\" (UID: \"9036b58f-2984-45ba-98d0-76ff599cda43\") " pod="openstack-operators/openstack-operator-index-gbfbh" Feb 19 09:59:07 crc kubenswrapper[4965]: I0219 09:59:07.899694 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgdqt\" (UniqueName: \"kubernetes.io/projected/9036b58f-2984-45ba-98d0-76ff599cda43-kube-api-access-dgdqt\") pod \"openstack-operator-index-gbfbh\" (UID: \"9036b58f-2984-45ba-98d0-76ff599cda43\") " pod="openstack-operators/openstack-operator-index-gbfbh" Feb 19 09:59:07 crc kubenswrapper[4965]: I0219 09:59:07.917159 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgdqt\" (UniqueName: \"kubernetes.io/projected/9036b58f-2984-45ba-98d0-76ff599cda43-kube-api-access-dgdqt\") pod \"openstack-operator-index-gbfbh\" (UID: 
\"9036b58f-2984-45ba-98d0-76ff599cda43\") " pod="openstack-operators/openstack-operator-index-gbfbh" Feb 19 09:59:08 crc kubenswrapper[4965]: I0219 09:59:08.039708 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gbfbh" Feb 19 09:59:08 crc kubenswrapper[4965]: I0219 09:59:08.448160 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gbfbh"] Feb 19 09:59:08 crc kubenswrapper[4965]: I0219 09:59:08.655417 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-hcb66" Feb 19 09:59:08 crc kubenswrapper[4965]: I0219 09:59:08.695523 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-hcb66" Feb 19 09:59:09 crc kubenswrapper[4965]: I0219 09:59:09.128595 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gbfbh" event={"ID":"9036b58f-2984-45ba-98d0-76ff599cda43","Type":"ContainerStarted","Data":"36bdfb3dca97baf6db284b982c3bdfdf67898723eaacdeafc5c8c7b9b7ce1e91"} Feb 19 09:59:11 crc kubenswrapper[4965]: I0219 09:59:11.146391 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gbfbh" event={"ID":"9036b58f-2984-45ba-98d0-76ff599cda43","Type":"ContainerStarted","Data":"a959f47a51f1f5933298a4235769b63c24e54dd0790f1fec18ff14b90fc1a4e5"} Feb 19 09:59:11 crc kubenswrapper[4965]: I0219 09:59:11.167063 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-gbfbh" podStartSLOduration=1.8487604960000001 podStartE2EDuration="4.167029534s" podCreationTimestamp="2026-02-19 09:59:07 +0000 UTC" firstStartedPulling="2026-02-19 09:59:08.46018382 +0000 UTC m=+1004.081505150" lastFinishedPulling="2026-02-19 09:59:10.778452878 +0000 UTC m=+1006.399774188" observedRunningTime="2026-02-19 
09:59:11.160783712 +0000 UTC m=+1006.782105062" watchObservedRunningTime="2026-02-19 09:59:11.167029534 +0000 UTC m=+1006.788350854" Feb 19 09:59:11 crc kubenswrapper[4965]: I0219 09:59:11.692057 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-gbfbh"] Feb 19 09:59:12 crc kubenswrapper[4965]: I0219 09:59:12.502588 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-wpjxk"] Feb 19 09:59:12 crc kubenswrapper[4965]: I0219 09:59:12.504236 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-wpjxk" Feb 19 09:59:12 crc kubenswrapper[4965]: I0219 09:59:12.531863 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wpjxk"] Feb 19 09:59:12 crc kubenswrapper[4965]: I0219 09:59:12.592938 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpczq\" (UniqueName: \"kubernetes.io/projected/2c206b8c-0a2e-4081-8f51-29977545ef20-kube-api-access-lpczq\") pod \"openstack-operator-index-wpjxk\" (UID: \"2c206b8c-0a2e-4081-8f51-29977545ef20\") " pod="openstack-operators/openstack-operator-index-wpjxk" Feb 19 09:59:12 crc kubenswrapper[4965]: I0219 09:59:12.695013 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpczq\" (UniqueName: \"kubernetes.io/projected/2c206b8c-0a2e-4081-8f51-29977545ef20-kube-api-access-lpczq\") pod \"openstack-operator-index-wpjxk\" (UID: \"2c206b8c-0a2e-4081-8f51-29977545ef20\") " pod="openstack-operators/openstack-operator-index-wpjxk" Feb 19 09:59:12 crc kubenswrapper[4965]: I0219 09:59:12.723714 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpczq\" (UniqueName: \"kubernetes.io/projected/2c206b8c-0a2e-4081-8f51-29977545ef20-kube-api-access-lpczq\") pod 
\"openstack-operator-index-wpjxk\" (UID: \"2c206b8c-0a2e-4081-8f51-29977545ef20\") " pod="openstack-operators/openstack-operator-index-wpjxk" Feb 19 09:59:12 crc kubenswrapper[4965]: I0219 09:59:12.837605 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-wpjxk" Feb 19 09:59:13 crc kubenswrapper[4965]: I0219 09:59:13.102762 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rhbxn"] Feb 19 09:59:13 crc kubenswrapper[4965]: I0219 09:59:13.105725 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhbxn" Feb 19 09:59:13 crc kubenswrapper[4965]: I0219 09:59:13.128580 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rhbxn"] Feb 19 09:59:13 crc kubenswrapper[4965]: I0219 09:59:13.166157 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-gbfbh" podUID="9036b58f-2984-45ba-98d0-76ff599cda43" containerName="registry-server" containerID="cri-o://a959f47a51f1f5933298a4235769b63c24e54dd0790f1fec18ff14b90fc1a4e5" gracePeriod=2 Feb 19 09:59:13 crc kubenswrapper[4965]: I0219 09:59:13.206274 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hprmz\" (UniqueName: \"kubernetes.io/projected/72a5cc6e-2093-4868-96a1-2eb8348e4e25-kube-api-access-hprmz\") pod \"certified-operators-rhbxn\" (UID: \"72a5cc6e-2093-4868-96a1-2eb8348e4e25\") " pod="openshift-marketplace/certified-operators-rhbxn" Feb 19 09:59:13 crc kubenswrapper[4965]: I0219 09:59:13.206439 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72a5cc6e-2093-4868-96a1-2eb8348e4e25-catalog-content\") pod \"certified-operators-rhbxn\" (UID: 
\"72a5cc6e-2093-4868-96a1-2eb8348e4e25\") " pod="openshift-marketplace/certified-operators-rhbxn" Feb 19 09:59:13 crc kubenswrapper[4965]: I0219 09:59:13.206521 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72a5cc6e-2093-4868-96a1-2eb8348e4e25-utilities\") pod \"certified-operators-rhbxn\" (UID: \"72a5cc6e-2093-4868-96a1-2eb8348e4e25\") " pod="openshift-marketplace/certified-operators-rhbxn" Feb 19 09:59:13 crc kubenswrapper[4965]: I0219 09:59:13.307766 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hprmz\" (UniqueName: \"kubernetes.io/projected/72a5cc6e-2093-4868-96a1-2eb8348e4e25-kube-api-access-hprmz\") pod \"certified-operators-rhbxn\" (UID: \"72a5cc6e-2093-4868-96a1-2eb8348e4e25\") " pod="openshift-marketplace/certified-operators-rhbxn" Feb 19 09:59:13 crc kubenswrapper[4965]: I0219 09:59:13.307822 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72a5cc6e-2093-4868-96a1-2eb8348e4e25-catalog-content\") pod \"certified-operators-rhbxn\" (UID: \"72a5cc6e-2093-4868-96a1-2eb8348e4e25\") " pod="openshift-marketplace/certified-operators-rhbxn" Feb 19 09:59:13 crc kubenswrapper[4965]: I0219 09:59:13.307849 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72a5cc6e-2093-4868-96a1-2eb8348e4e25-utilities\") pod \"certified-operators-rhbxn\" (UID: \"72a5cc6e-2093-4868-96a1-2eb8348e4e25\") " pod="openshift-marketplace/certified-operators-rhbxn" Feb 19 09:59:13 crc kubenswrapper[4965]: I0219 09:59:13.308557 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72a5cc6e-2093-4868-96a1-2eb8348e4e25-utilities\") pod \"certified-operators-rhbxn\" (UID: 
\"72a5cc6e-2093-4868-96a1-2eb8348e4e25\") " pod="openshift-marketplace/certified-operators-rhbxn" Feb 19 09:59:13 crc kubenswrapper[4965]: I0219 09:59:13.308642 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72a5cc6e-2093-4868-96a1-2eb8348e4e25-catalog-content\") pod \"certified-operators-rhbxn\" (UID: \"72a5cc6e-2093-4868-96a1-2eb8348e4e25\") " pod="openshift-marketplace/certified-operators-rhbxn" Feb 19 09:59:13 crc kubenswrapper[4965]: I0219 09:59:13.337171 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hprmz\" (UniqueName: \"kubernetes.io/projected/72a5cc6e-2093-4868-96a1-2eb8348e4e25-kube-api-access-hprmz\") pod \"certified-operators-rhbxn\" (UID: \"72a5cc6e-2093-4868-96a1-2eb8348e4e25\") " pod="openshift-marketplace/certified-operators-rhbxn" Feb 19 09:59:13 crc kubenswrapper[4965]: I0219 09:59:13.378368 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wpjxk"] Feb 19 09:59:13 crc kubenswrapper[4965]: W0219 09:59:13.389533 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c206b8c_0a2e_4081_8f51_29977545ef20.slice/crio-574634eb0ab009e6d69ff8d8dc49ed85b67e0cee6a106e3d8b70d26053fc0b48 WatchSource:0}: Error finding container 574634eb0ab009e6d69ff8d8dc49ed85b67e0cee6a106e3d8b70d26053fc0b48: Status 404 returned error can't find the container with id 574634eb0ab009e6d69ff8d8dc49ed85b67e0cee6a106e3d8b70d26053fc0b48 Feb 19 09:59:13 crc kubenswrapper[4965]: I0219 09:59:13.440177 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhbxn" Feb 19 09:59:13 crc kubenswrapper[4965]: I0219 09:59:13.610372 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gbfbh" Feb 19 09:59:13 crc kubenswrapper[4965]: I0219 09:59:13.679332 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-l7r8n" Feb 19 09:59:13 crc kubenswrapper[4965]: I0219 09:59:13.716257 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgdqt\" (UniqueName: \"kubernetes.io/projected/9036b58f-2984-45ba-98d0-76ff599cda43-kube-api-access-dgdqt\") pod \"9036b58f-2984-45ba-98d0-76ff599cda43\" (UID: \"9036b58f-2984-45ba-98d0-76ff599cda43\") " Feb 19 09:59:13 crc kubenswrapper[4965]: I0219 09:59:13.728369 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9036b58f-2984-45ba-98d0-76ff599cda43-kube-api-access-dgdqt" (OuterVolumeSpecName: "kube-api-access-dgdqt") pod "9036b58f-2984-45ba-98d0-76ff599cda43" (UID: "9036b58f-2984-45ba-98d0-76ff599cda43"). InnerVolumeSpecName "kube-api-access-dgdqt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:59:13 crc kubenswrapper[4965]: I0219 09:59:13.817724 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgdqt\" (UniqueName: \"kubernetes.io/projected/9036b58f-2984-45ba-98d0-76ff599cda43-kube-api-access-dgdqt\") on node \"crc\" DevicePath \"\"" Feb 19 09:59:13 crc kubenswrapper[4965]: I0219 09:59:13.986135 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rhbxn"] Feb 19 09:59:14 crc kubenswrapper[4965]: I0219 09:59:14.174238 4965 generic.go:334] "Generic (PLEG): container finished" podID="9036b58f-2984-45ba-98d0-76ff599cda43" containerID="a959f47a51f1f5933298a4235769b63c24e54dd0790f1fec18ff14b90fc1a4e5" exitCode=0 Feb 19 09:59:14 crc kubenswrapper[4965]: I0219 09:59:14.174326 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gbfbh" event={"ID":"9036b58f-2984-45ba-98d0-76ff599cda43","Type":"ContainerDied","Data":"a959f47a51f1f5933298a4235769b63c24e54dd0790f1fec18ff14b90fc1a4e5"} Feb 19 09:59:14 crc kubenswrapper[4965]: I0219 09:59:14.174377 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gbfbh" event={"ID":"9036b58f-2984-45ba-98d0-76ff599cda43","Type":"ContainerDied","Data":"36bdfb3dca97baf6db284b982c3bdfdf67898723eaacdeafc5c8c7b9b7ce1e91"} Feb 19 09:59:14 crc kubenswrapper[4965]: I0219 09:59:14.174396 4965 scope.go:117] "RemoveContainer" containerID="a959f47a51f1f5933298a4235769b63c24e54dd0790f1fec18ff14b90fc1a4e5" Feb 19 09:59:14 crc kubenswrapper[4965]: I0219 09:59:14.174793 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gbfbh" Feb 19 09:59:14 crc kubenswrapper[4965]: I0219 09:59:14.176398 4965 generic.go:334] "Generic (PLEG): container finished" podID="72a5cc6e-2093-4868-96a1-2eb8348e4e25" containerID="ad637eeb316d9d9fe2e18a8decaf4a37108aff26d250073000c72f53b68daaab" exitCode=0 Feb 19 09:59:14 crc kubenswrapper[4965]: I0219 09:59:14.176531 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhbxn" event={"ID":"72a5cc6e-2093-4868-96a1-2eb8348e4e25","Type":"ContainerDied","Data":"ad637eeb316d9d9fe2e18a8decaf4a37108aff26d250073000c72f53b68daaab"} Feb 19 09:59:14 crc kubenswrapper[4965]: I0219 09:59:14.176574 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhbxn" event={"ID":"72a5cc6e-2093-4868-96a1-2eb8348e4e25","Type":"ContainerStarted","Data":"d38fba734fd68cd598fa7c4e14d8e657949e9f04f76b6f876bd432aa0109b444"} Feb 19 09:59:14 crc kubenswrapper[4965]: I0219 09:59:14.178306 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wpjxk" event={"ID":"2c206b8c-0a2e-4081-8f51-29977545ef20","Type":"ContainerStarted","Data":"2285397934b7e71e67562c92596841ed38c545fb756177ea647091d4e35299ba"} Feb 19 09:59:14 crc kubenswrapper[4965]: I0219 09:59:14.178430 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wpjxk" event={"ID":"2c206b8c-0a2e-4081-8f51-29977545ef20","Type":"ContainerStarted","Data":"574634eb0ab009e6d69ff8d8dc49ed85b67e0cee6a106e3d8b70d26053fc0b48"} Feb 19 09:59:14 crc kubenswrapper[4965]: I0219 09:59:14.197673 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-wpjxk" podStartSLOduration=2.110305309 podStartE2EDuration="2.1976492s" podCreationTimestamp="2026-02-19 09:59:12 +0000 UTC" firstStartedPulling="2026-02-19 09:59:13.393340718 
+0000 UTC m=+1009.014662028" lastFinishedPulling="2026-02-19 09:59:13.480684599 +0000 UTC m=+1009.102005919" observedRunningTime="2026-02-19 09:59:14.193894009 +0000 UTC m=+1009.815215359" watchObservedRunningTime="2026-02-19 09:59:14.1976492 +0000 UTC m=+1009.818970510" Feb 19 09:59:14 crc kubenswrapper[4965]: I0219 09:59:14.215445 4965 scope.go:117] "RemoveContainer" containerID="a959f47a51f1f5933298a4235769b63c24e54dd0790f1fec18ff14b90fc1a4e5" Feb 19 09:59:14 crc kubenswrapper[4965]: E0219 09:59:14.216115 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a959f47a51f1f5933298a4235769b63c24e54dd0790f1fec18ff14b90fc1a4e5\": container with ID starting with a959f47a51f1f5933298a4235769b63c24e54dd0790f1fec18ff14b90fc1a4e5 not found: ID does not exist" containerID="a959f47a51f1f5933298a4235769b63c24e54dd0790f1fec18ff14b90fc1a4e5" Feb 19 09:59:14 crc kubenswrapper[4965]: I0219 09:59:14.216153 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a959f47a51f1f5933298a4235769b63c24e54dd0790f1fec18ff14b90fc1a4e5"} err="failed to get container status \"a959f47a51f1f5933298a4235769b63c24e54dd0790f1fec18ff14b90fc1a4e5\": rpc error: code = NotFound desc = could not find container \"a959f47a51f1f5933298a4235769b63c24e54dd0790f1fec18ff14b90fc1a4e5\": container with ID starting with a959f47a51f1f5933298a4235769b63c24e54dd0790f1fec18ff14b90fc1a4e5 not found: ID does not exist" Feb 19 09:59:14 crc kubenswrapper[4965]: I0219 09:59:14.229292 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-gbfbh"] Feb 19 09:59:14 crc kubenswrapper[4965]: I0219 09:59:14.233288 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-gbfbh"] Feb 19 09:59:15 crc kubenswrapper[4965]: I0219 09:59:15.187951 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-rhbxn" event={"ID":"72a5cc6e-2093-4868-96a1-2eb8348e4e25","Type":"ContainerStarted","Data":"c2be811b2623333b3afb7b8112517031cc6dc7f8c100eb514cc6d872cc74d9cc"} Feb 19 09:59:15 crc kubenswrapper[4965]: I0219 09:59:15.220746 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9036b58f-2984-45ba-98d0-76ff599cda43" path="/var/lib/kubelet/pods/9036b58f-2984-45ba-98d0-76ff599cda43/volumes" Feb 19 09:59:16 crc kubenswrapper[4965]: I0219 09:59:16.214432 4965 generic.go:334] "Generic (PLEG): container finished" podID="72a5cc6e-2093-4868-96a1-2eb8348e4e25" containerID="c2be811b2623333b3afb7b8112517031cc6dc7f8c100eb514cc6d872cc74d9cc" exitCode=0 Feb 19 09:59:16 crc kubenswrapper[4965]: I0219 09:59:16.215790 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhbxn" event={"ID":"72a5cc6e-2093-4868-96a1-2eb8348e4e25","Type":"ContainerDied","Data":"c2be811b2623333b3afb7b8112517031cc6dc7f8c100eb514cc6d872cc74d9cc"} Feb 19 09:59:17 crc kubenswrapper[4965]: I0219 09:59:17.225441 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhbxn" event={"ID":"72a5cc6e-2093-4868-96a1-2eb8348e4e25","Type":"ContainerStarted","Data":"29e2692dae74d0185b9e2dc32d724d979c237c672077d725d6d5a273a28b40c1"} Feb 19 09:59:17 crc kubenswrapper[4965]: I0219 09:59:17.252706 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rhbxn" podStartSLOduration=1.843989157 podStartE2EDuration="4.252685389s" podCreationTimestamp="2026-02-19 09:59:13 +0000 UTC" firstStartedPulling="2026-02-19 09:59:14.179119671 +0000 UTC m=+1009.800441001" lastFinishedPulling="2026-02-19 09:59:16.587815883 +0000 UTC m=+1012.209137233" observedRunningTime="2026-02-19 09:59:17.251595893 +0000 UTC m=+1012.872917243" watchObservedRunningTime="2026-02-19 09:59:17.252685389 +0000 UTC m=+1012.874006719" Feb 19 
09:59:22 crc kubenswrapper[4965]: I0219 09:59:22.838781 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-wpjxk" Feb 19 09:59:22 crc kubenswrapper[4965]: I0219 09:59:22.839593 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-wpjxk" Feb 19 09:59:22 crc kubenswrapper[4965]: I0219 09:59:22.884343 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-wpjxk" Feb 19 09:59:23 crc kubenswrapper[4965]: I0219 09:59:23.310796 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-wpjxk" Feb 19 09:59:23 crc kubenswrapper[4965]: I0219 09:59:23.441371 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rhbxn" Feb 19 09:59:23 crc kubenswrapper[4965]: I0219 09:59:23.441449 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rhbxn" Feb 19 09:59:23 crc kubenswrapper[4965]: I0219 09:59:23.516701 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rhbxn" Feb 19 09:59:23 crc kubenswrapper[4965]: I0219 09:59:23.663656 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-hcb66" Feb 19 09:59:24 crc kubenswrapper[4965]: I0219 09:59:24.357851 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rhbxn" Feb 19 09:59:26 crc kubenswrapper[4965]: I0219 09:59:26.096940 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rhbxn"] Feb 19 09:59:26 crc kubenswrapper[4965]: I0219 09:59:26.297426 4965 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/certified-operators-rhbxn" podUID="72a5cc6e-2093-4868-96a1-2eb8348e4e25" containerName="registry-server" containerID="cri-o://29e2692dae74d0185b9e2dc32d724d979c237c672077d725d6d5a273a28b40c1" gracePeriod=2 Feb 19 09:59:26 crc kubenswrapper[4965]: I0219 09:59:26.822262 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhbxn" Feb 19 09:59:26 crc kubenswrapper[4965]: I0219 09:59:26.910485 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72a5cc6e-2093-4868-96a1-2eb8348e4e25-catalog-content\") pod \"72a5cc6e-2093-4868-96a1-2eb8348e4e25\" (UID: \"72a5cc6e-2093-4868-96a1-2eb8348e4e25\") " Feb 19 09:59:26 crc kubenswrapper[4965]: I0219 09:59:26.910729 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hprmz\" (UniqueName: \"kubernetes.io/projected/72a5cc6e-2093-4868-96a1-2eb8348e4e25-kube-api-access-hprmz\") pod \"72a5cc6e-2093-4868-96a1-2eb8348e4e25\" (UID: \"72a5cc6e-2093-4868-96a1-2eb8348e4e25\") " Feb 19 09:59:26 crc kubenswrapper[4965]: I0219 09:59:26.910944 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72a5cc6e-2093-4868-96a1-2eb8348e4e25-utilities\") pod \"72a5cc6e-2093-4868-96a1-2eb8348e4e25\" (UID: \"72a5cc6e-2093-4868-96a1-2eb8348e4e25\") " Feb 19 09:59:26 crc kubenswrapper[4965]: I0219 09:59:26.912658 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72a5cc6e-2093-4868-96a1-2eb8348e4e25-utilities" (OuterVolumeSpecName: "utilities") pod "72a5cc6e-2093-4868-96a1-2eb8348e4e25" (UID: "72a5cc6e-2093-4868-96a1-2eb8348e4e25"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:59:26 crc kubenswrapper[4965]: I0219 09:59:26.920566 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72a5cc6e-2093-4868-96a1-2eb8348e4e25-kube-api-access-hprmz" (OuterVolumeSpecName: "kube-api-access-hprmz") pod "72a5cc6e-2093-4868-96a1-2eb8348e4e25" (UID: "72a5cc6e-2093-4868-96a1-2eb8348e4e25"). InnerVolumeSpecName "kube-api-access-hprmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:59:26 crc kubenswrapper[4965]: I0219 09:59:26.994448 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72a5cc6e-2093-4868-96a1-2eb8348e4e25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72a5cc6e-2093-4868-96a1-2eb8348e4e25" (UID: "72a5cc6e-2093-4868-96a1-2eb8348e4e25"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:59:27 crc kubenswrapper[4965]: I0219 09:59:27.013608 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72a5cc6e-2093-4868-96a1-2eb8348e4e25-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 09:59:27 crc kubenswrapper[4965]: I0219 09:59:27.013690 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hprmz\" (UniqueName: \"kubernetes.io/projected/72a5cc6e-2093-4868-96a1-2eb8348e4e25-kube-api-access-hprmz\") on node \"crc\" DevicePath \"\"" Feb 19 09:59:27 crc kubenswrapper[4965]: I0219 09:59:27.013903 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72a5cc6e-2093-4868-96a1-2eb8348e4e25-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 09:59:27 crc kubenswrapper[4965]: I0219 09:59:27.307652 4965 generic.go:334] "Generic (PLEG): container finished" podID="72a5cc6e-2093-4868-96a1-2eb8348e4e25" 
containerID="29e2692dae74d0185b9e2dc32d724d979c237c672077d725d6d5a273a28b40c1" exitCode=0 Feb 19 09:59:27 crc kubenswrapper[4965]: I0219 09:59:27.307733 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhbxn" event={"ID":"72a5cc6e-2093-4868-96a1-2eb8348e4e25","Type":"ContainerDied","Data":"29e2692dae74d0185b9e2dc32d724d979c237c672077d725d6d5a273a28b40c1"} Feb 19 09:59:27 crc kubenswrapper[4965]: I0219 09:59:27.307777 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhbxn" Feb 19 09:59:27 crc kubenswrapper[4965]: I0219 09:59:27.307795 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhbxn" event={"ID":"72a5cc6e-2093-4868-96a1-2eb8348e4e25","Type":"ContainerDied","Data":"d38fba734fd68cd598fa7c4e14d8e657949e9f04f76b6f876bd432aa0109b444"} Feb 19 09:59:27 crc kubenswrapper[4965]: I0219 09:59:27.307817 4965 scope.go:117] "RemoveContainer" containerID="29e2692dae74d0185b9e2dc32d724d979c237c672077d725d6d5a273a28b40c1" Feb 19 09:59:27 crc kubenswrapper[4965]: I0219 09:59:27.336817 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rhbxn"] Feb 19 09:59:27 crc kubenswrapper[4965]: I0219 09:59:27.337601 4965 scope.go:117] "RemoveContainer" containerID="c2be811b2623333b3afb7b8112517031cc6dc7f8c100eb514cc6d872cc74d9cc" Feb 19 09:59:27 crc kubenswrapper[4965]: I0219 09:59:27.343806 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rhbxn"] Feb 19 09:59:27 crc kubenswrapper[4965]: I0219 09:59:27.363253 4965 scope.go:117] "RemoveContainer" containerID="ad637eeb316d9d9fe2e18a8decaf4a37108aff26d250073000c72f53b68daaab" Feb 19 09:59:27 crc kubenswrapper[4965]: I0219 09:59:27.394723 4965 scope.go:117] "RemoveContainer" containerID="29e2692dae74d0185b9e2dc32d724d979c237c672077d725d6d5a273a28b40c1" Feb 19 
09:59:27 crc kubenswrapper[4965]: E0219 09:59:27.395525 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29e2692dae74d0185b9e2dc32d724d979c237c672077d725d6d5a273a28b40c1\": container with ID starting with 29e2692dae74d0185b9e2dc32d724d979c237c672077d725d6d5a273a28b40c1 not found: ID does not exist" containerID="29e2692dae74d0185b9e2dc32d724d979c237c672077d725d6d5a273a28b40c1" Feb 19 09:59:27 crc kubenswrapper[4965]: I0219 09:59:27.395593 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29e2692dae74d0185b9e2dc32d724d979c237c672077d725d6d5a273a28b40c1"} err="failed to get container status \"29e2692dae74d0185b9e2dc32d724d979c237c672077d725d6d5a273a28b40c1\": rpc error: code = NotFound desc = could not find container \"29e2692dae74d0185b9e2dc32d724d979c237c672077d725d6d5a273a28b40c1\": container with ID starting with 29e2692dae74d0185b9e2dc32d724d979c237c672077d725d6d5a273a28b40c1 not found: ID does not exist" Feb 19 09:59:27 crc kubenswrapper[4965]: I0219 09:59:27.395635 4965 scope.go:117] "RemoveContainer" containerID="c2be811b2623333b3afb7b8112517031cc6dc7f8c100eb514cc6d872cc74d9cc" Feb 19 09:59:27 crc kubenswrapper[4965]: E0219 09:59:27.396146 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2be811b2623333b3afb7b8112517031cc6dc7f8c100eb514cc6d872cc74d9cc\": container with ID starting with c2be811b2623333b3afb7b8112517031cc6dc7f8c100eb514cc6d872cc74d9cc not found: ID does not exist" containerID="c2be811b2623333b3afb7b8112517031cc6dc7f8c100eb514cc6d872cc74d9cc" Feb 19 09:59:27 crc kubenswrapper[4965]: I0219 09:59:27.396224 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2be811b2623333b3afb7b8112517031cc6dc7f8c100eb514cc6d872cc74d9cc"} err="failed to get container status 
\"c2be811b2623333b3afb7b8112517031cc6dc7f8c100eb514cc6d872cc74d9cc\": rpc error: code = NotFound desc = could not find container \"c2be811b2623333b3afb7b8112517031cc6dc7f8c100eb514cc6d872cc74d9cc\": container with ID starting with c2be811b2623333b3afb7b8112517031cc6dc7f8c100eb514cc6d872cc74d9cc not found: ID does not exist" Feb 19 09:59:27 crc kubenswrapper[4965]: I0219 09:59:27.396252 4965 scope.go:117] "RemoveContainer" containerID="ad637eeb316d9d9fe2e18a8decaf4a37108aff26d250073000c72f53b68daaab" Feb 19 09:59:27 crc kubenswrapper[4965]: E0219 09:59:27.396966 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad637eeb316d9d9fe2e18a8decaf4a37108aff26d250073000c72f53b68daaab\": container with ID starting with ad637eeb316d9d9fe2e18a8decaf4a37108aff26d250073000c72f53b68daaab not found: ID does not exist" containerID="ad637eeb316d9d9fe2e18a8decaf4a37108aff26d250073000c72f53b68daaab" Feb 19 09:59:27 crc kubenswrapper[4965]: I0219 09:59:27.397004 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad637eeb316d9d9fe2e18a8decaf4a37108aff26d250073000c72f53b68daaab"} err="failed to get container status \"ad637eeb316d9d9fe2e18a8decaf4a37108aff26d250073000c72f53b68daaab\": rpc error: code = NotFound desc = could not find container \"ad637eeb316d9d9fe2e18a8decaf4a37108aff26d250073000c72f53b68daaab\": container with ID starting with ad637eeb316d9d9fe2e18a8decaf4a37108aff26d250073000c72f53b68daaab not found: ID does not exist" Feb 19 09:59:29 crc kubenswrapper[4965]: I0219 09:59:29.211541 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72a5cc6e-2093-4868-96a1-2eb8348e4e25" path="/var/lib/kubelet/pods/72a5cc6e-2093-4868-96a1-2eb8348e4e25/volumes" Feb 19 09:59:29 crc kubenswrapper[4965]: I0219 09:59:29.358162 4965 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr"] Feb 19 09:59:29 crc kubenswrapper[4965]: E0219 09:59:29.358692 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a5cc6e-2093-4868-96a1-2eb8348e4e25" containerName="extract-content" Feb 19 09:59:29 crc kubenswrapper[4965]: I0219 09:59:29.358727 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a5cc6e-2093-4868-96a1-2eb8348e4e25" containerName="extract-content" Feb 19 09:59:29 crc kubenswrapper[4965]: E0219 09:59:29.358764 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a5cc6e-2093-4868-96a1-2eb8348e4e25" containerName="extract-utilities" Feb 19 09:59:29 crc kubenswrapper[4965]: I0219 09:59:29.358780 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a5cc6e-2093-4868-96a1-2eb8348e4e25" containerName="extract-utilities" Feb 19 09:59:29 crc kubenswrapper[4965]: E0219 09:59:29.358807 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a5cc6e-2093-4868-96a1-2eb8348e4e25" containerName="registry-server" Feb 19 09:59:29 crc kubenswrapper[4965]: I0219 09:59:29.358820 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a5cc6e-2093-4868-96a1-2eb8348e4e25" containerName="registry-server" Feb 19 09:59:29 crc kubenswrapper[4965]: E0219 09:59:29.358837 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9036b58f-2984-45ba-98d0-76ff599cda43" containerName="registry-server" Feb 19 09:59:29 crc kubenswrapper[4965]: I0219 09:59:29.358849 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="9036b58f-2984-45ba-98d0-76ff599cda43" containerName="registry-server" Feb 19 09:59:29 crc kubenswrapper[4965]: I0219 09:59:29.359058 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a5cc6e-2093-4868-96a1-2eb8348e4e25" containerName="registry-server" Feb 19 09:59:29 crc kubenswrapper[4965]: I0219 09:59:29.359092 4965 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="9036b58f-2984-45ba-98d0-76ff599cda43" containerName="registry-server" Feb 19 09:59:29 crc kubenswrapper[4965]: I0219 09:59:29.360718 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr" Feb 19 09:59:29 crc kubenswrapper[4965]: I0219 09:59:29.364513 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-x57dp" Feb 19 09:59:29 crc kubenswrapper[4965]: I0219 09:59:29.379187 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr"] Feb 19 09:59:29 crc kubenswrapper[4965]: I0219 09:59:29.448807 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/62a8af9d-5a83-4c80-bc2e-49c0c576ed6e-bundle\") pod \"b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr\" (UID: \"62a8af9d-5a83-4c80-bc2e-49c0c576ed6e\") " pod="openstack-operators/b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr" Feb 19 09:59:29 crc kubenswrapper[4965]: I0219 09:59:29.448873 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/62a8af9d-5a83-4c80-bc2e-49c0c576ed6e-util\") pod \"b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr\" (UID: \"62a8af9d-5a83-4c80-bc2e-49c0c576ed6e\") " pod="openstack-operators/b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr" Feb 19 09:59:29 crc kubenswrapper[4965]: I0219 09:59:29.448925 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk9p5\" (UniqueName: \"kubernetes.io/projected/62a8af9d-5a83-4c80-bc2e-49c0c576ed6e-kube-api-access-pk9p5\") pod 
\"b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr\" (UID: \"62a8af9d-5a83-4c80-bc2e-49c0c576ed6e\") " pod="openstack-operators/b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr" Feb 19 09:59:29 crc kubenswrapper[4965]: I0219 09:59:29.550521 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/62a8af9d-5a83-4c80-bc2e-49c0c576ed6e-bundle\") pod \"b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr\" (UID: \"62a8af9d-5a83-4c80-bc2e-49c0c576ed6e\") " pod="openstack-operators/b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr" Feb 19 09:59:29 crc kubenswrapper[4965]: I0219 09:59:29.551332 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/62a8af9d-5a83-4c80-bc2e-49c0c576ed6e-util\") pod \"b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr\" (UID: \"62a8af9d-5a83-4c80-bc2e-49c0c576ed6e\") " pod="openstack-operators/b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr" Feb 19 09:59:29 crc kubenswrapper[4965]: I0219 09:59:29.551468 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/62a8af9d-5a83-4c80-bc2e-49c0c576ed6e-util\") pod \"b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr\" (UID: \"62a8af9d-5a83-4c80-bc2e-49c0c576ed6e\") " pod="openstack-operators/b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr" Feb 19 09:59:29 crc kubenswrapper[4965]: I0219 09:59:29.551674 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk9p5\" (UniqueName: \"kubernetes.io/projected/62a8af9d-5a83-4c80-bc2e-49c0c576ed6e-kube-api-access-pk9p5\") pod \"b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr\" (UID: \"62a8af9d-5a83-4c80-bc2e-49c0c576ed6e\") " 
pod="openstack-operators/b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr" Feb 19 09:59:29 crc kubenswrapper[4965]: I0219 09:59:29.551979 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/62a8af9d-5a83-4c80-bc2e-49c0c576ed6e-bundle\") pod \"b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr\" (UID: \"62a8af9d-5a83-4c80-bc2e-49c0c576ed6e\") " pod="openstack-operators/b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr" Feb 19 09:59:29 crc kubenswrapper[4965]: I0219 09:59:29.578629 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk9p5\" (UniqueName: \"kubernetes.io/projected/62a8af9d-5a83-4c80-bc2e-49c0c576ed6e-kube-api-access-pk9p5\") pod \"b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr\" (UID: \"62a8af9d-5a83-4c80-bc2e-49c0c576ed6e\") " pod="openstack-operators/b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr" Feb 19 09:59:29 crc kubenswrapper[4965]: I0219 09:59:29.733821 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr" Feb 19 09:59:29 crc kubenswrapper[4965]: I0219 09:59:29.975212 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr"] Feb 19 09:59:30 crc kubenswrapper[4965]: I0219 09:59:30.343407 4965 generic.go:334] "Generic (PLEG): container finished" podID="62a8af9d-5a83-4c80-bc2e-49c0c576ed6e" containerID="b3985dc0ddbfd16d3caff80ad47ab576b5079eb1014553226750923e6f9cc016" exitCode=0 Feb 19 09:59:30 crc kubenswrapper[4965]: I0219 09:59:30.343488 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr" event={"ID":"62a8af9d-5a83-4c80-bc2e-49c0c576ed6e","Type":"ContainerDied","Data":"b3985dc0ddbfd16d3caff80ad47ab576b5079eb1014553226750923e6f9cc016"} Feb 19 09:59:30 crc kubenswrapper[4965]: I0219 09:59:30.343765 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr" event={"ID":"62a8af9d-5a83-4c80-bc2e-49c0c576ed6e","Type":"ContainerStarted","Data":"a16bb56dab7bcd843b018aad500909efd78b3016ed2e3809552a0a75ed7341c2"} Feb 19 09:59:30 crc kubenswrapper[4965]: I0219 09:59:30.346038 4965 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 09:59:31 crc kubenswrapper[4965]: I0219 09:59:31.353562 4965 generic.go:334] "Generic (PLEG): container finished" podID="62a8af9d-5a83-4c80-bc2e-49c0c576ed6e" containerID="7b11d81ce0b764a9141eaf3b12ed1d60693039515ad94236bee150d0422f81a7" exitCode=0 Feb 19 09:59:31 crc kubenswrapper[4965]: I0219 09:59:31.353670 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr" 
event={"ID":"62a8af9d-5a83-4c80-bc2e-49c0c576ed6e","Type":"ContainerDied","Data":"7b11d81ce0b764a9141eaf3b12ed1d60693039515ad94236bee150d0422f81a7"} Feb 19 09:59:32 crc kubenswrapper[4965]: I0219 09:59:32.363081 4965 generic.go:334] "Generic (PLEG): container finished" podID="62a8af9d-5a83-4c80-bc2e-49c0c576ed6e" containerID="022f348e26664ace901d43c510cb5c5a2aaefe61ca44ac7a9f466689ba98faa6" exitCode=0 Feb 19 09:59:32 crc kubenswrapper[4965]: I0219 09:59:32.363153 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr" event={"ID":"62a8af9d-5a83-4c80-bc2e-49c0c576ed6e","Type":"ContainerDied","Data":"022f348e26664ace901d43c510cb5c5a2aaefe61ca44ac7a9f466689ba98faa6"} Feb 19 09:59:33 crc kubenswrapper[4965]: I0219 09:59:33.730723 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr" Feb 19 09:59:33 crc kubenswrapper[4965]: I0219 09:59:33.816065 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/62a8af9d-5a83-4c80-bc2e-49c0c576ed6e-util\") pod \"62a8af9d-5a83-4c80-bc2e-49c0c576ed6e\" (UID: \"62a8af9d-5a83-4c80-bc2e-49c0c576ed6e\") " Feb 19 09:59:33 crc kubenswrapper[4965]: I0219 09:59:33.816223 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk9p5\" (UniqueName: \"kubernetes.io/projected/62a8af9d-5a83-4c80-bc2e-49c0c576ed6e-kube-api-access-pk9p5\") pod \"62a8af9d-5a83-4c80-bc2e-49c0c576ed6e\" (UID: \"62a8af9d-5a83-4c80-bc2e-49c0c576ed6e\") " Feb 19 09:59:33 crc kubenswrapper[4965]: I0219 09:59:33.816369 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/62a8af9d-5a83-4c80-bc2e-49c0c576ed6e-bundle\") pod \"62a8af9d-5a83-4c80-bc2e-49c0c576ed6e\" (UID: 
\"62a8af9d-5a83-4c80-bc2e-49c0c576ed6e\") " Feb 19 09:59:33 crc kubenswrapper[4965]: I0219 09:59:33.817494 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62a8af9d-5a83-4c80-bc2e-49c0c576ed6e-bundle" (OuterVolumeSpecName: "bundle") pod "62a8af9d-5a83-4c80-bc2e-49c0c576ed6e" (UID: "62a8af9d-5a83-4c80-bc2e-49c0c576ed6e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:59:33 crc kubenswrapper[4965]: I0219 09:59:33.821790 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62a8af9d-5a83-4c80-bc2e-49c0c576ed6e-kube-api-access-pk9p5" (OuterVolumeSpecName: "kube-api-access-pk9p5") pod "62a8af9d-5a83-4c80-bc2e-49c0c576ed6e" (UID: "62a8af9d-5a83-4c80-bc2e-49c0c576ed6e"). InnerVolumeSpecName "kube-api-access-pk9p5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 09:59:33 crc kubenswrapper[4965]: I0219 09:59:33.849395 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62a8af9d-5a83-4c80-bc2e-49c0c576ed6e-util" (OuterVolumeSpecName: "util") pod "62a8af9d-5a83-4c80-bc2e-49c0c576ed6e" (UID: "62a8af9d-5a83-4c80-bc2e-49c0c576ed6e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 09:59:33 crc kubenswrapper[4965]: I0219 09:59:33.918372 4965 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/62a8af9d-5a83-4c80-bc2e-49c0c576ed6e-util\") on node \"crc\" DevicePath \"\"" Feb 19 09:59:33 crc kubenswrapper[4965]: I0219 09:59:33.918403 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk9p5\" (UniqueName: \"kubernetes.io/projected/62a8af9d-5a83-4c80-bc2e-49c0c576ed6e-kube-api-access-pk9p5\") on node \"crc\" DevicePath \"\"" Feb 19 09:59:33 crc kubenswrapper[4965]: I0219 09:59:33.918417 4965 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/62a8af9d-5a83-4c80-bc2e-49c0c576ed6e-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 09:59:34 crc kubenswrapper[4965]: I0219 09:59:34.381278 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr" event={"ID":"62a8af9d-5a83-4c80-bc2e-49c0c576ed6e","Type":"ContainerDied","Data":"a16bb56dab7bcd843b018aad500909efd78b3016ed2e3809552a0a75ed7341c2"} Feb 19 09:59:34 crc kubenswrapper[4965]: I0219 09:59:34.381566 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a16bb56dab7bcd843b018aad500909efd78b3016ed2e3809552a0a75ed7341c2" Feb 19 09:59:34 crc kubenswrapper[4965]: I0219 09:59:34.381719 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr" Feb 19 09:59:36 crc kubenswrapper[4965]: I0219 09:59:36.125074 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-d86db9fbc-vplp8"] Feb 19 09:59:36 crc kubenswrapper[4965]: E0219 09:59:36.125525 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a8af9d-5a83-4c80-bc2e-49c0c576ed6e" containerName="util" Feb 19 09:59:36 crc kubenswrapper[4965]: I0219 09:59:36.125536 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a8af9d-5a83-4c80-bc2e-49c0c576ed6e" containerName="util" Feb 19 09:59:36 crc kubenswrapper[4965]: E0219 09:59:36.125549 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a8af9d-5a83-4c80-bc2e-49c0c576ed6e" containerName="pull" Feb 19 09:59:36 crc kubenswrapper[4965]: I0219 09:59:36.125556 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a8af9d-5a83-4c80-bc2e-49c0c576ed6e" containerName="pull" Feb 19 09:59:36 crc kubenswrapper[4965]: E0219 09:59:36.125573 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a8af9d-5a83-4c80-bc2e-49c0c576ed6e" containerName="extract" Feb 19 09:59:36 crc kubenswrapper[4965]: I0219 09:59:36.125580 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a8af9d-5a83-4c80-bc2e-49c0c576ed6e" containerName="extract" Feb 19 09:59:36 crc kubenswrapper[4965]: I0219 09:59:36.125678 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="62a8af9d-5a83-4c80-bc2e-49c0c576ed6e" containerName="extract" Feb 19 09:59:36 crc kubenswrapper[4965]: I0219 09:59:36.126069 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-d86db9fbc-vplp8" Feb 19 09:59:36 crc kubenswrapper[4965]: I0219 09:59:36.128264 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-svds2" Feb 19 09:59:36 crc kubenswrapper[4965]: I0219 09:59:36.150047 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-d86db9fbc-vplp8"] Feb 19 09:59:36 crc kubenswrapper[4965]: I0219 09:59:36.251382 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngvx2\" (UniqueName: \"kubernetes.io/projected/5c35ac3d-3c0a-48b2-a17d-ce896fa8a00e-kube-api-access-ngvx2\") pod \"openstack-operator-controller-init-d86db9fbc-vplp8\" (UID: \"5c35ac3d-3c0a-48b2-a17d-ce896fa8a00e\") " pod="openstack-operators/openstack-operator-controller-init-d86db9fbc-vplp8" Feb 19 09:59:36 crc kubenswrapper[4965]: I0219 09:59:36.352828 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngvx2\" (UniqueName: \"kubernetes.io/projected/5c35ac3d-3c0a-48b2-a17d-ce896fa8a00e-kube-api-access-ngvx2\") pod \"openstack-operator-controller-init-d86db9fbc-vplp8\" (UID: \"5c35ac3d-3c0a-48b2-a17d-ce896fa8a00e\") " pod="openstack-operators/openstack-operator-controller-init-d86db9fbc-vplp8" Feb 19 09:59:36 crc kubenswrapper[4965]: I0219 09:59:36.369359 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngvx2\" (UniqueName: \"kubernetes.io/projected/5c35ac3d-3c0a-48b2-a17d-ce896fa8a00e-kube-api-access-ngvx2\") pod \"openstack-operator-controller-init-d86db9fbc-vplp8\" (UID: \"5c35ac3d-3c0a-48b2-a17d-ce896fa8a00e\") " pod="openstack-operators/openstack-operator-controller-init-d86db9fbc-vplp8" Feb 19 09:59:36 crc kubenswrapper[4965]: I0219 09:59:36.445023 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-d86db9fbc-vplp8" Feb 19 09:59:36 crc kubenswrapper[4965]: I0219 09:59:36.776476 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-d86db9fbc-vplp8"] Feb 19 09:59:37 crc kubenswrapper[4965]: I0219 09:59:37.400048 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-d86db9fbc-vplp8" event={"ID":"5c35ac3d-3c0a-48b2-a17d-ce896fa8a00e","Type":"ContainerStarted","Data":"df099ba46ed2e35e6608c83a07edbccadf88beea0301f9a5f5af3c3fa1ee9361"} Feb 19 09:59:41 crc kubenswrapper[4965]: I0219 09:59:41.428046 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-d86db9fbc-vplp8" event={"ID":"5c35ac3d-3c0a-48b2-a17d-ce896fa8a00e","Type":"ContainerStarted","Data":"c6e7aa2ac1878c4e645c60b3f727697f0dc3ea11341ed68610d65249f2fa262f"} Feb 19 09:59:41 crc kubenswrapper[4965]: I0219 09:59:41.429002 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-d86db9fbc-vplp8" Feb 19 09:59:41 crc kubenswrapper[4965]: I0219 09:59:41.466895 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-d86db9fbc-vplp8" podStartSLOduration=1.662197527 podStartE2EDuration="5.466875179s" podCreationTimestamp="2026-02-19 09:59:36 +0000 UTC" firstStartedPulling="2026-02-19 09:59:36.778851535 +0000 UTC m=+1032.400172855" lastFinishedPulling="2026-02-19 09:59:40.583529197 +0000 UTC m=+1036.204850507" observedRunningTime="2026-02-19 09:59:41.464140663 +0000 UTC m=+1037.085462013" watchObservedRunningTime="2026-02-19 09:59:41.466875179 +0000 UTC m=+1037.088196489" Feb 19 09:59:46 crc kubenswrapper[4965]: I0219 09:59:46.447414 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-init-d86db9fbc-vplp8" Feb 19 10:00:00 crc kubenswrapper[4965]: I0219 10:00:00.147771 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524920-vnskl"] Feb 19 10:00:00 crc kubenswrapper[4965]: I0219 10:00:00.149854 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-vnskl" Feb 19 10:00:00 crc kubenswrapper[4965]: I0219 10:00:00.157749 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524920-vnskl"] Feb 19 10:00:00 crc kubenswrapper[4965]: I0219 10:00:00.161346 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 10:00:00 crc kubenswrapper[4965]: I0219 10:00:00.161498 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 10:00:00 crc kubenswrapper[4965]: I0219 10:00:00.299342 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbmv5\" (UniqueName: \"kubernetes.io/projected/26362129-d9e2-4c99-925d-475b863b274a-kube-api-access-rbmv5\") pod \"collect-profiles-29524920-vnskl\" (UID: \"26362129-d9e2-4c99-925d-475b863b274a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-vnskl" Feb 19 10:00:00 crc kubenswrapper[4965]: I0219 10:00:00.299440 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26362129-d9e2-4c99-925d-475b863b274a-config-volume\") pod \"collect-profiles-29524920-vnskl\" (UID: \"26362129-d9e2-4c99-925d-475b863b274a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-vnskl" Feb 19 10:00:00 crc 
kubenswrapper[4965]: I0219 10:00:00.299475 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26362129-d9e2-4c99-925d-475b863b274a-secret-volume\") pod \"collect-profiles-29524920-vnskl\" (UID: \"26362129-d9e2-4c99-925d-475b863b274a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-vnskl" Feb 19 10:00:00 crc kubenswrapper[4965]: I0219 10:00:00.400793 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbmv5\" (UniqueName: \"kubernetes.io/projected/26362129-d9e2-4c99-925d-475b863b274a-kube-api-access-rbmv5\") pod \"collect-profiles-29524920-vnskl\" (UID: \"26362129-d9e2-4c99-925d-475b863b274a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-vnskl" Feb 19 10:00:00 crc kubenswrapper[4965]: I0219 10:00:00.400896 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26362129-d9e2-4c99-925d-475b863b274a-config-volume\") pod \"collect-profiles-29524920-vnskl\" (UID: \"26362129-d9e2-4c99-925d-475b863b274a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-vnskl" Feb 19 10:00:00 crc kubenswrapper[4965]: I0219 10:00:00.400927 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26362129-d9e2-4c99-925d-475b863b274a-secret-volume\") pod \"collect-profiles-29524920-vnskl\" (UID: \"26362129-d9e2-4c99-925d-475b863b274a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-vnskl" Feb 19 10:00:00 crc kubenswrapper[4965]: I0219 10:00:00.402672 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26362129-d9e2-4c99-925d-475b863b274a-config-volume\") pod \"collect-profiles-29524920-vnskl\" (UID: 
\"26362129-d9e2-4c99-925d-475b863b274a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-vnskl" Feb 19 10:00:00 crc kubenswrapper[4965]: I0219 10:00:00.414682 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26362129-d9e2-4c99-925d-475b863b274a-secret-volume\") pod \"collect-profiles-29524920-vnskl\" (UID: \"26362129-d9e2-4c99-925d-475b863b274a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-vnskl" Feb 19 10:00:00 crc kubenswrapper[4965]: I0219 10:00:00.426917 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbmv5\" (UniqueName: \"kubernetes.io/projected/26362129-d9e2-4c99-925d-475b863b274a-kube-api-access-rbmv5\") pod \"collect-profiles-29524920-vnskl\" (UID: \"26362129-d9e2-4c99-925d-475b863b274a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-vnskl" Feb 19 10:00:00 crc kubenswrapper[4965]: I0219 10:00:00.475471 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-vnskl" Feb 19 10:00:00 crc kubenswrapper[4965]: I0219 10:00:00.729849 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524920-vnskl"] Feb 19 10:00:01 crc kubenswrapper[4965]: I0219 10:00:01.579736 4965 generic.go:334] "Generic (PLEG): container finished" podID="26362129-d9e2-4c99-925d-475b863b274a" containerID="2dec4872582dbc706006c2ed72bf111d3dc386fddffb27499276bb624188b106" exitCode=0 Feb 19 10:00:01 crc kubenswrapper[4965]: I0219 10:00:01.579810 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-vnskl" event={"ID":"26362129-d9e2-4c99-925d-475b863b274a","Type":"ContainerDied","Data":"2dec4872582dbc706006c2ed72bf111d3dc386fddffb27499276bb624188b106"} Feb 19 10:00:01 crc kubenswrapper[4965]: I0219 10:00:01.579992 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-vnskl" event={"ID":"26362129-d9e2-4c99-925d-475b863b274a","Type":"ContainerStarted","Data":"e51a0329c29d5bf9cb8a2b0f068a2e9b1469ffc0fbffeea7ddbfea6c1c542957"} Feb 19 10:00:02 crc kubenswrapper[4965]: I0219 10:00:02.929698 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-vnskl" Feb 19 10:00:03 crc kubenswrapper[4965]: I0219 10:00:03.034830 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26362129-d9e2-4c99-925d-475b863b274a-secret-volume\") pod \"26362129-d9e2-4c99-925d-475b863b274a\" (UID: \"26362129-d9e2-4c99-925d-475b863b274a\") " Feb 19 10:00:03 crc kubenswrapper[4965]: I0219 10:00:03.034901 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbmv5\" (UniqueName: \"kubernetes.io/projected/26362129-d9e2-4c99-925d-475b863b274a-kube-api-access-rbmv5\") pod \"26362129-d9e2-4c99-925d-475b863b274a\" (UID: \"26362129-d9e2-4c99-925d-475b863b274a\") " Feb 19 10:00:03 crc kubenswrapper[4965]: I0219 10:00:03.034998 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26362129-d9e2-4c99-925d-475b863b274a-config-volume\") pod \"26362129-d9e2-4c99-925d-475b863b274a\" (UID: \"26362129-d9e2-4c99-925d-475b863b274a\") " Feb 19 10:00:03 crc kubenswrapper[4965]: I0219 10:00:03.035968 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26362129-d9e2-4c99-925d-475b863b274a-config-volume" (OuterVolumeSpecName: "config-volume") pod "26362129-d9e2-4c99-925d-475b863b274a" (UID: "26362129-d9e2-4c99-925d-475b863b274a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:00:03 crc kubenswrapper[4965]: I0219 10:00:03.040483 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26362129-d9e2-4c99-925d-475b863b274a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "26362129-d9e2-4c99-925d-475b863b274a" (UID: "26362129-d9e2-4c99-925d-475b863b274a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:00:03 crc kubenswrapper[4965]: I0219 10:00:03.040611 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26362129-d9e2-4c99-925d-475b863b274a-kube-api-access-rbmv5" (OuterVolumeSpecName: "kube-api-access-rbmv5") pod "26362129-d9e2-4c99-925d-475b863b274a" (UID: "26362129-d9e2-4c99-925d-475b863b274a"). InnerVolumeSpecName "kube-api-access-rbmv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:00:03 crc kubenswrapper[4965]: I0219 10:00:03.136572 4965 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26362129-d9e2-4c99-925d-475b863b274a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:00:03 crc kubenswrapper[4965]: I0219 10:00:03.136631 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbmv5\" (UniqueName: \"kubernetes.io/projected/26362129-d9e2-4c99-925d-475b863b274a-kube-api-access-rbmv5\") on node \"crc\" DevicePath \"\"" Feb 19 10:00:03 crc kubenswrapper[4965]: I0219 10:00:03.136644 4965 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26362129-d9e2-4c99-925d-475b863b274a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:00:03 crc kubenswrapper[4965]: I0219 10:00:03.593572 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-vnskl" event={"ID":"26362129-d9e2-4c99-925d-475b863b274a","Type":"ContainerDied","Data":"e51a0329c29d5bf9cb8a2b0f068a2e9b1469ffc0fbffeea7ddbfea6c1c542957"} Feb 19 10:00:03 crc kubenswrapper[4965]: I0219 10:00:03.593886 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e51a0329c29d5bf9cb8a2b0f068a2e9b1469ffc0fbffeea7ddbfea6c1c542957" Feb 19 10:00:03 crc kubenswrapper[4965]: I0219 10:00:03.593652 4965 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-vnskl" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.310189 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-jncdt"] Feb 19 10:00:07 crc kubenswrapper[4965]: E0219 10:00:07.310753 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26362129-d9e2-4c99-925d-475b863b274a" containerName="collect-profiles" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.310778 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="26362129-d9e2-4c99-925d-475b863b274a" containerName="collect-profiles" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.310949 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="26362129-d9e2-4c99-925d-475b863b274a" containerName="collect-profiles" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.311507 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jncdt" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.316783 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-rg7g2" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.319151 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-2g7mq"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.320081 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-2g7mq" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.321800 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-jncdt"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.331564 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-b65fp" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.338363 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztvs5"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.339312 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztvs5" Feb 19 10:00:07 crc kubenswrapper[4965]: W0219 10:00:07.340988 4965 reflector.go:561] object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-78bmk": failed to list *v1.Secret: secrets "designate-operator-controller-manager-dockercfg-78bmk" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack-operators": no relationship found between node 'crc' and this object Feb 19 10:00:07 crc kubenswrapper[4965]: E0219 10:00:07.341036 4965 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"designate-operator-controller-manager-dockercfg-78bmk\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"designate-operator-controller-manager-dockercfg-78bmk\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.376165 4965 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztvs5"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.384943 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-2g7mq"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.419100 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8ss8\" (UniqueName: \"kubernetes.io/projected/24b54009-86e7-409a-991e-a406d38ab751-kube-api-access-c8ss8\") pod \"barbican-operator-controller-manager-868647ff47-jncdt\" (UID: \"24b54009-86e7-409a-991e-a406d38ab751\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jncdt" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.419208 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k29jj\" (UniqueName: \"kubernetes.io/projected/74f4ddc1-28bd-411f-8f0c-c5bfc3bfcec6-kube-api-access-k29jj\") pod \"cinder-operator-controller-manager-5d946d989d-2g7mq\" (UID: \"74f4ddc1-28bd-411f-8f0c-c5bfc3bfcec6\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-2g7mq" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.419285 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-bndgq"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.419301 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgkkd\" (UniqueName: \"kubernetes.io/projected/7c1737a3-9dfe-4208-a8da-8be7f09394d9-kube-api-access-hgkkd\") pod \"designate-operator-controller-manager-6d8bf5c495-ztvs5\" (UID: \"7c1737a3-9dfe-4208-a8da-8be7f09394d9\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztvs5" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 
10:00:07.420147 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-bndgq" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.423548 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-tdkls" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.429180 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-4rtq9"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.429981 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-4rtq9" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.432023 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-dgkch" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.436261 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-bndgq"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.441941 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vgqfx"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.442713 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vgqfx" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.446987 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-4rtq9"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.448229 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-x64ls" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.478251 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-zqmsr"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.479073 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-zqmsr" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.482425 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.483925 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-ntszk" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.487802 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vgqfx"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.514754 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-zqmsr"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.520930 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8ss8\" (UniqueName: \"kubernetes.io/projected/24b54009-86e7-409a-991e-a406d38ab751-kube-api-access-c8ss8\") pod 
\"barbican-operator-controller-manager-868647ff47-jncdt\" (UID: \"24b54009-86e7-409a-991e-a406d38ab751\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jncdt" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.521030 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq88x\" (UniqueName: \"kubernetes.io/projected/fe5bbdd4-d10a-4bc6-bd35-76c7abb54600-kube-api-access-cq88x\") pod \"horizon-operator-controller-manager-5b9b8895d5-vgqfx\" (UID: \"fe5bbdd4-d10a-4bc6-bd35-76c7abb54600\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vgqfx" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.521060 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k29jj\" (UniqueName: \"kubernetes.io/projected/74f4ddc1-28bd-411f-8f0c-c5bfc3bfcec6-kube-api-access-k29jj\") pod \"cinder-operator-controller-manager-5d946d989d-2g7mq\" (UID: \"74f4ddc1-28bd-411f-8f0c-c5bfc3bfcec6\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-2g7mq" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.521109 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgkkd\" (UniqueName: \"kubernetes.io/projected/7c1737a3-9dfe-4208-a8da-8be7f09394d9-kube-api-access-hgkkd\") pod \"designate-operator-controller-manager-6d8bf5c495-ztvs5\" (UID: \"7c1737a3-9dfe-4208-a8da-8be7f09394d9\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztvs5" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.521131 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dmjs\" (UniqueName: \"kubernetes.io/projected/8e1c4dc5-2d5b-46fb-b3cc-1ae2749fd02c-kube-api-access-8dmjs\") pod \"heat-operator-controller-manager-69f49c598c-4rtq9\" (UID: \"8e1c4dc5-2d5b-46fb-b3cc-1ae2749fd02c\") " 
pod="openstack-operators/heat-operator-controller-manager-69f49c598c-4rtq9" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.521149 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8x89\" (UniqueName: \"kubernetes.io/projected/ef077548-5e44-43f1-9f0d-3cf539bca16b-kube-api-access-k8x89\") pod \"glance-operator-controller-manager-77987464f4-bndgq\" (UID: \"ef077548-5e44-43f1-9f0d-3cf539bca16b\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-bndgq" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.533907 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-z78k7"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.534746 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-z78k7" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.538735 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-5dpwf" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.545907 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8ss8\" (UniqueName: \"kubernetes.io/projected/24b54009-86e7-409a-991e-a406d38ab751-kube-api-access-c8ss8\") pod \"barbican-operator-controller-manager-868647ff47-jncdt\" (UID: \"24b54009-86e7-409a-991e-a406d38ab751\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jncdt" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.546430 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k29jj\" (UniqueName: \"kubernetes.io/projected/74f4ddc1-28bd-411f-8f0c-c5bfc3bfcec6-kube-api-access-k29jj\") pod \"cinder-operator-controller-manager-5d946d989d-2g7mq\" (UID: \"74f4ddc1-28bd-411f-8f0c-c5bfc3bfcec6\") " 
pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-2g7mq" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.552844 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgkkd\" (UniqueName: \"kubernetes.io/projected/7c1737a3-9dfe-4208-a8da-8be7f09394d9-kube-api-access-hgkkd\") pod \"designate-operator-controller-manager-6d8bf5c495-ztvs5\" (UID: \"7c1737a3-9dfe-4208-a8da-8be7f09394d9\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztvs5" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.554938 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-z78k7"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.570731 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-h5skt"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.571553 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-h5skt" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.579477 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-r96dn" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.598226 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-zh77z"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.599120 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-zh77z" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.608794 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-sxln7" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.608973 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-h5skt"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.623065 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f161526-b0fd-453b-8ae7-7b9b7a485b97-cert\") pod \"infra-operator-controller-manager-79d975b745-zqmsr\" (UID: \"2f161526-b0fd-453b-8ae7-7b9b7a485b97\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-zqmsr" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.623134 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq88x\" (UniqueName: \"kubernetes.io/projected/fe5bbdd4-d10a-4bc6-bd35-76c7abb54600-kube-api-access-cq88x\") pod \"horizon-operator-controller-manager-5b9b8895d5-vgqfx\" (UID: \"fe5bbdd4-d10a-4bc6-bd35-76c7abb54600\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vgqfx" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.623167 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkzmc\" (UniqueName: \"kubernetes.io/projected/73c20094-0abc-4525-ae77-d571755841fa-kube-api-access-zkzmc\") pod \"ironic-operator-controller-manager-554564d7fc-z78k7\" (UID: \"73c20094-0abc-4525-ae77-d571755841fa\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-z78k7" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.623229 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgcn9\" (UniqueName: \"kubernetes.io/projected/2f161526-b0fd-453b-8ae7-7b9b7a485b97-kube-api-access-jgcn9\") pod \"infra-operator-controller-manager-79d975b745-zqmsr\" (UID: \"2f161526-b0fd-453b-8ae7-7b9b7a485b97\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-zqmsr" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.623270 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dmjs\" (UniqueName: \"kubernetes.io/projected/8e1c4dc5-2d5b-46fb-b3cc-1ae2749fd02c-kube-api-access-8dmjs\") pod \"heat-operator-controller-manager-69f49c598c-4rtq9\" (UID: \"8e1c4dc5-2d5b-46fb-b3cc-1ae2749fd02c\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-4rtq9" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.623406 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8x89\" (UniqueName: \"kubernetes.io/projected/ef077548-5e44-43f1-9f0d-3cf539bca16b-kube-api-access-k8x89\") pod \"glance-operator-controller-manager-77987464f4-bndgq\" (UID: \"ef077548-5e44-43f1-9f0d-3cf539bca16b\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-bndgq" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.623525 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6tt4\" (UniqueName: \"kubernetes.io/projected/5747cc94-5621-4a7d-b599-f2a0f2a2aa29-kube-api-access-q6tt4\") pod \"keystone-operator-controller-manager-b4d948c87-h5skt\" (UID: \"5747cc94-5621-4a7d-b599-f2a0f2a2aa29\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-h5skt" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.641799 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jncdt" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.672333 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-zh77z"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.698440 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-2g7mq" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.700934 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dmjs\" (UniqueName: \"kubernetes.io/projected/8e1c4dc5-2d5b-46fb-b3cc-1ae2749fd02c-kube-api-access-8dmjs\") pod \"heat-operator-controller-manager-69f49c598c-4rtq9\" (UID: \"8e1c4dc5-2d5b-46fb-b3cc-1ae2749fd02c\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-4rtq9" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.709813 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq88x\" (UniqueName: \"kubernetes.io/projected/fe5bbdd4-d10a-4bc6-bd35-76c7abb54600-kube-api-access-cq88x\") pod \"horizon-operator-controller-manager-5b9b8895d5-vgqfx\" (UID: \"fe5bbdd4-d10a-4bc6-bd35-76c7abb54600\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vgqfx" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.726058 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8x89\" (UniqueName: \"kubernetes.io/projected/ef077548-5e44-43f1-9f0d-3cf539bca16b-kube-api-access-k8x89\") pod \"glance-operator-controller-manager-77987464f4-bndgq\" (UID: \"ef077548-5e44-43f1-9f0d-3cf539bca16b\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-bndgq" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.734583 4965 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-jgcn9\" (UniqueName: \"kubernetes.io/projected/2f161526-b0fd-453b-8ae7-7b9b7a485b97-kube-api-access-jgcn9\") pod \"infra-operator-controller-manager-79d975b745-zqmsr\" (UID: \"2f161526-b0fd-453b-8ae7-7b9b7a485b97\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-zqmsr" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.735270 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-49xr8"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.737716 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-49xr8" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.736126 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htkbk\" (UniqueName: \"kubernetes.io/projected/c94f0d1d-5edd-4b64-b2c7-85bdc5022ec3-kube-api-access-htkbk\") pod \"manila-operator-controller-manager-54f6768c69-zh77z\" (UID: \"c94f0d1d-5edd-4b64-b2c7-85bdc5022ec3\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-zh77z" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.738564 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6tt4\" (UniqueName: \"kubernetes.io/projected/5747cc94-5621-4a7d-b599-f2a0f2a2aa29-kube-api-access-q6tt4\") pod \"keystone-operator-controller-manager-b4d948c87-h5skt\" (UID: \"5747cc94-5621-4a7d-b599-f2a0f2a2aa29\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-h5skt" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.738681 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f161526-b0fd-453b-8ae7-7b9b7a485b97-cert\") pod \"infra-operator-controller-manager-79d975b745-zqmsr\" (UID: 
\"2f161526-b0fd-453b-8ae7-7b9b7a485b97\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-zqmsr" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.738795 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkzmc\" (UniqueName: \"kubernetes.io/projected/73c20094-0abc-4525-ae77-d571755841fa-kube-api-access-zkzmc\") pod \"ironic-operator-controller-manager-554564d7fc-z78k7\" (UID: \"73c20094-0abc-4525-ae77-d571755841fa\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-z78k7" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.738481 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-bndgq" Feb 19 10:00:07 crc kubenswrapper[4965]: E0219 10:00:07.739076 4965 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 10:00:07 crc kubenswrapper[4965]: E0219 10:00:07.739964 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f161526-b0fd-453b-8ae7-7b9b7a485b97-cert podName:2f161526-b0fd-453b-8ae7-7b9b7a485b97 nodeName:}" failed. No retries permitted until 2026-02-19 10:00:08.239947368 +0000 UTC m=+1063.861268678 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2f161526-b0fd-453b-8ae7-7b9b7a485b97-cert") pod "infra-operator-controller-manager-79d975b745-zqmsr" (UID: "2f161526-b0fd-453b-8ae7-7b9b7a485b97") : secret "infra-operator-webhook-server-cert" not found Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.742767 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-4md54"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.743917 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-4md54" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.744601 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-x7mmk" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.747259 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-4rtq9" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.747639 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-49xr8"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.755747 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-4md54"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.757041 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-sjfx7" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.764482 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vgqfx" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.767161 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-7mzd9"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.768751 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-glzx9"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.769261 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-7mzd9"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.769327 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-glzx9" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.769631 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7mzd9" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.770758 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkzmc\" (UniqueName: \"kubernetes.io/projected/73c20094-0abc-4525-ae77-d571755841fa-kube-api-access-zkzmc\") pod \"ironic-operator-controller-manager-554564d7fc-z78k7\" (UID: \"73c20094-0abc-4525-ae77-d571755841fa\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-z78k7" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.771853 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6tt4\" (UniqueName: \"kubernetes.io/projected/5747cc94-5621-4a7d-b599-f2a0f2a2aa29-kube-api-access-q6tt4\") pod \"keystone-operator-controller-manager-b4d948c87-h5skt\" (UID: \"5747cc94-5621-4a7d-b599-f2a0f2a2aa29\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-h5skt" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.772218 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-2v68s" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.772358 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-gsgpm" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.772642 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgcn9\" (UniqueName: \"kubernetes.io/projected/2f161526-b0fd-453b-8ae7-7b9b7a485b97-kube-api-access-jgcn9\") pod \"infra-operator-controller-manager-79d975b745-zqmsr\" (UID: \"2f161526-b0fd-453b-8ae7-7b9b7a485b97\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-zqmsr" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.795343 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-glzx9"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.820160 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.821072 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.824811 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-vdlt7" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.825002 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.834800 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-wp77d"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.837726 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-wp77d" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.839826 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htkbk\" (UniqueName: \"kubernetes.io/projected/c94f0d1d-5edd-4b64-b2c7-85bdc5022ec3-kube-api-access-htkbk\") pod \"manila-operator-controller-manager-54f6768c69-zh77z\" (UID: \"c94f0d1d-5edd-4b64-b2c7-85bdc5022ec3\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-zh77z" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.857047 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv9qp\" (UniqueName: \"kubernetes.io/projected/ec34bcd2-48d7-4522-a32a-268a3a1b385c-kube-api-access-kv9qp\") pod \"neutron-operator-controller-manager-64ddbf8bb-49xr8\" (UID: \"ec34bcd2-48d7-4522-a32a-268a3a1b385c\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-49xr8" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.857647 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkvdl\" (UniqueName: \"kubernetes.io/projected/a0ff2743-9ab6-4388-b0af-06e06c3e7587-kube-api-access-fkvdl\") pod \"octavia-operator-controller-manager-69f8888797-glzx9\" (UID: \"a0ff2743-9ab6-4388-b0af-06e06c3e7587\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-glzx9" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.857936 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4svgh\" (UniqueName: \"kubernetes.io/projected/9898282c-422b-49dd-b369-da910d49a2d8-kube-api-access-4svgh\") pod \"mariadb-operator-controller-manager-6994f66f48-4md54\" (UID: \"9898282c-422b-49dd-b369-da910d49a2d8\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-4md54" Feb 
19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.847579 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-wp77d"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.859274 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8c86\" (UniqueName: \"kubernetes.io/projected/18230479-3d13-49f7-a2a1-95a191acb3db-kube-api-access-n8c86\") pod \"nova-operator-controller-manager-567668f5cf-7mzd9\" (UID: \"18230479-3d13-49f7-a2a1-95a191acb3db\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7mzd9" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.859462 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-h27hl"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.861517 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-v4bt8" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.867366 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-h27hl" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.869529 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.873918 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-q7ngp" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.877758 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htkbk\" (UniqueName: \"kubernetes.io/projected/c94f0d1d-5edd-4b64-b2c7-85bdc5022ec3-kube-api-access-htkbk\") pod \"manila-operator-controller-manager-54f6768c69-zh77z\" (UID: \"c94f0d1d-5edd-4b64-b2c7-85bdc5022ec3\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-zh77z" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.877811 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-h27hl"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.885343 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-h5rvt"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.886145 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-h5rvt" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.886997 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-h5rvt"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.890511 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-zh65m" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.897624 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-dff68c48-5928s"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.907086 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-z78k7" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.907431 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-dff68c48-5928s" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.909233 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-dff68c48-5928s"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.917511 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-h5skt" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.925531 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-5tfvt" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.933017 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-zh77z" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.937280 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-jzssc"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.938115 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-jzssc" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.947370 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cm94j" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.962713 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d99hj\" (UniqueName: \"kubernetes.io/projected/e70fa350-bca9-4007-80a9-15cfb3a56b11-kube-api-access-d99hj\") pod \"swift-operator-controller-manager-68f46476f-h5rvt\" (UID: \"e70fa350-bca9-4007-80a9-15cfb3a56b11\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-h5rvt" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.962758 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8c86\" (UniqueName: \"kubernetes.io/projected/18230479-3d13-49f7-a2a1-95a191acb3db-kube-api-access-n8c86\") pod \"nova-operator-controller-manager-567668f5cf-7mzd9\" (UID: \"18230479-3d13-49f7-a2a1-95a191acb3db\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7mzd9" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.962780 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58e82cd5-3bd0-4f99-b958-29e5541fa49a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq\" (UID: 
\"58e82cd5-3bd0-4f99-b958-29e5541fa49a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.962812 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff8b6\" (UniqueName: \"kubernetes.io/projected/ca57fee7-64f8-4c49-9170-6f6e618c78e7-kube-api-access-ff8b6\") pod \"ovn-operator-controller-manager-d44cf6b75-wp77d\" (UID: \"ca57fee7-64f8-4c49-9170-6f6e618c78e7\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-wp77d" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.962875 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv9qp\" (UniqueName: \"kubernetes.io/projected/ec34bcd2-48d7-4522-a32a-268a3a1b385c-kube-api-access-kv9qp\") pod \"neutron-operator-controller-manager-64ddbf8bb-49xr8\" (UID: \"ec34bcd2-48d7-4522-a32a-268a3a1b385c\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-49xr8" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.962899 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnqm6\" (UniqueName: \"kubernetes.io/projected/f1fcb3fa-62de-4b0b-93db-3e401ff94fe4-kube-api-access-hnqm6\") pod \"telemetry-operator-controller-manager-dff68c48-5928s\" (UID: \"f1fcb3fa-62de-4b0b-93db-3e401ff94fe4\") " pod="openstack-operators/telemetry-operator-controller-manager-dff68c48-5928s" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.962919 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkvdl\" (UniqueName: \"kubernetes.io/projected/a0ff2743-9ab6-4388-b0af-06e06c3e7587-kube-api-access-fkvdl\") pod \"octavia-operator-controller-manager-69f8888797-glzx9\" (UID: \"a0ff2743-9ab6-4388-b0af-06e06c3e7587\") " 
pod="openstack-operators/octavia-operator-controller-manager-69f8888797-glzx9" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.962949 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66f88\" (UniqueName: \"kubernetes.io/projected/58e82cd5-3bd0-4f99-b958-29e5541fa49a-kube-api-access-66f88\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq\" (UID: \"58e82cd5-3bd0-4f99-b958-29e5541fa49a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.962969 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4svgh\" (UniqueName: \"kubernetes.io/projected/9898282c-422b-49dd-b369-da910d49a2d8-kube-api-access-4svgh\") pod \"mariadb-operator-controller-manager-6994f66f48-4md54\" (UID: \"9898282c-422b-49dd-b369-da910d49a2d8\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-4md54" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.962987 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2jfb\" (UniqueName: \"kubernetes.io/projected/a354e865-3819-4147-a565-4682bc4c6a6c-kube-api-access-s2jfb\") pod \"placement-operator-controller-manager-8497b45c89-h27hl\" (UID: \"a354e865-3819-4147-a565-4682bc4c6a6c\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-h27hl" Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.964069 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-jzssc"] Feb 19 10:00:07 crc kubenswrapper[4965]: I0219 10:00:07.991856 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8c86\" (UniqueName: \"kubernetes.io/projected/18230479-3d13-49f7-a2a1-95a191acb3db-kube-api-access-n8c86\") pod 
\"nova-operator-controller-manager-567668f5cf-7mzd9\" (UID: \"18230479-3d13-49f7-a2a1-95a191acb3db\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7mzd9" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:07.997223 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkvdl\" (UniqueName: \"kubernetes.io/projected/a0ff2743-9ab6-4388-b0af-06e06c3e7587-kube-api-access-fkvdl\") pod \"octavia-operator-controller-manager-69f8888797-glzx9\" (UID: \"a0ff2743-9ab6-4388-b0af-06e06c3e7587\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-glzx9" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.010504 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4svgh\" (UniqueName: \"kubernetes.io/projected/9898282c-422b-49dd-b369-da910d49a2d8-kube-api-access-4svgh\") pod \"mariadb-operator-controller-manager-6994f66f48-4md54\" (UID: \"9898282c-422b-49dd-b369-da910d49a2d8\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-4md54" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.015449 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv9qp\" (UniqueName: \"kubernetes.io/projected/ec34bcd2-48d7-4522-a32a-268a3a1b385c-kube-api-access-kv9qp\") pod \"neutron-operator-controller-manager-64ddbf8bb-49xr8\" (UID: \"ec34bcd2-48d7-4522-a32a-268a3a1b385c\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-49xr8" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.062574 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-54vfn"] Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.063666 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-54vfn" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.063855 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgnqx\" (UniqueName: \"kubernetes.io/projected/7e1ae3d6-7af0-406d-b740-98c9f5c9403c-kube-api-access-lgnqx\") pod \"test-operator-controller-manager-7866795846-jzssc\" (UID: \"7e1ae3d6-7af0-406d-b740-98c9f5c9403c\") " pod="openstack-operators/test-operator-controller-manager-7866795846-jzssc" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.063917 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnqm6\" (UniqueName: \"kubernetes.io/projected/f1fcb3fa-62de-4b0b-93db-3e401ff94fe4-kube-api-access-hnqm6\") pod \"telemetry-operator-controller-manager-dff68c48-5928s\" (UID: \"f1fcb3fa-62de-4b0b-93db-3e401ff94fe4\") " pod="openstack-operators/telemetry-operator-controller-manager-dff68c48-5928s" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.063969 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66f88\" (UniqueName: \"kubernetes.io/projected/58e82cd5-3bd0-4f99-b958-29e5541fa49a-kube-api-access-66f88\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq\" (UID: \"58e82cd5-3bd0-4f99-b958-29e5541fa49a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.064003 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2jfb\" (UniqueName: \"kubernetes.io/projected/a354e865-3819-4147-a565-4682bc4c6a6c-kube-api-access-s2jfb\") pod \"placement-operator-controller-manager-8497b45c89-h27hl\" (UID: \"a354e865-3819-4147-a565-4682bc4c6a6c\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-h27hl" Feb 19 
10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.064033 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d99hj\" (UniqueName: \"kubernetes.io/projected/e70fa350-bca9-4007-80a9-15cfb3a56b11-kube-api-access-d99hj\") pod \"swift-operator-controller-manager-68f46476f-h5rvt\" (UID: \"e70fa350-bca9-4007-80a9-15cfb3a56b11\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-h5rvt" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.064054 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58e82cd5-3bd0-4f99-b958-29e5541fa49a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq\" (UID: \"58e82cd5-3bd0-4f99-b958-29e5541fa49a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.064091 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff8b6\" (UniqueName: \"kubernetes.io/projected/ca57fee7-64f8-4c49-9170-6f6e618c78e7-kube-api-access-ff8b6\") pod \"ovn-operator-controller-manager-d44cf6b75-wp77d\" (UID: \"ca57fee7-64f8-4c49-9170-6f6e618c78e7\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-wp77d" Feb 19 10:00:08 crc kubenswrapper[4965]: E0219 10:00:08.065349 4965 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 10:00:08 crc kubenswrapper[4965]: E0219 10:00:08.065393 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58e82cd5-3bd0-4f99-b958-29e5541fa49a-cert podName:58e82cd5-3bd0-4f99-b958-29e5541fa49a nodeName:}" failed. No retries permitted until 2026-02-19 10:00:08.565379882 +0000 UTC m=+1064.186701192 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58e82cd5-3bd0-4f99-b958-29e5541fa49a-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" (UID: "58e82cd5-3bd0-4f99-b958-29e5541fa49a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.088973 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-r6s5t" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.094683 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-54vfn"] Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.096129 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d99hj\" (UniqueName: \"kubernetes.io/projected/e70fa350-bca9-4007-80a9-15cfb3a56b11-kube-api-access-d99hj\") pod \"swift-operator-controller-manager-68f46476f-h5rvt\" (UID: \"e70fa350-bca9-4007-80a9-15cfb3a56b11\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-h5rvt" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.098316 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnqm6\" (UniqueName: \"kubernetes.io/projected/f1fcb3fa-62de-4b0b-93db-3e401ff94fe4-kube-api-access-hnqm6\") pod \"telemetry-operator-controller-manager-dff68c48-5928s\" (UID: \"f1fcb3fa-62de-4b0b-93db-3e401ff94fe4\") " pod="openstack-operators/telemetry-operator-controller-manager-dff68c48-5928s" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.120752 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2jfb\" (UniqueName: \"kubernetes.io/projected/a354e865-3819-4147-a565-4682bc4c6a6c-kube-api-access-s2jfb\") pod \"placement-operator-controller-manager-8497b45c89-h27hl\" (UID: \"a354e865-3819-4147-a565-4682bc4c6a6c\") " 
pod="openstack-operators/placement-operator-controller-manager-8497b45c89-h27hl" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.121700 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff8b6\" (UniqueName: \"kubernetes.io/projected/ca57fee7-64f8-4c49-9170-6f6e618c78e7-kube-api-access-ff8b6\") pod \"ovn-operator-controller-manager-d44cf6b75-wp77d\" (UID: \"ca57fee7-64f8-4c49-9170-6f6e618c78e7\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-wp77d" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.122373 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66f88\" (UniqueName: \"kubernetes.io/projected/58e82cd5-3bd0-4f99-b958-29e5541fa49a-kube-api-access-66f88\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq\" (UID: \"58e82cd5-3bd0-4f99-b958-29e5541fa49a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.123953 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-49xr8" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.139739 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-4md54" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.178740 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-glzx9" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.197157 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgnqx\" (UniqueName: \"kubernetes.io/projected/7e1ae3d6-7af0-406d-b740-98c9f5c9403c-kube-api-access-lgnqx\") pod \"test-operator-controller-manager-7866795846-jzssc\" (UID: \"7e1ae3d6-7af0-406d-b740-98c9f5c9403c\") " pod="openstack-operators/test-operator-controller-manager-7866795846-jzssc" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.217058 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7mzd9" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.217637 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzdr4\" (UniqueName: \"kubernetes.io/projected/6bd1df07-8b75-44b8-91a3-4f612b64c279-kube-api-access-vzdr4\") pod \"watcher-operator-controller-manager-5db88f68c-54vfn\" (UID: \"6bd1df07-8b75-44b8-91a3-4f612b64c279\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-54vfn" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.222550 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8"] Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.227935 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.237776 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgnqx\" (UniqueName: \"kubernetes.io/projected/7e1ae3d6-7af0-406d-b740-98c9f5c9403c-kube-api-access-lgnqx\") pod \"test-operator-controller-manager-7866795846-jzssc\" (UID: \"7e1ae3d6-7af0-406d-b740-98c9f5c9403c\") " pod="openstack-operators/test-operator-controller-manager-7866795846-jzssc" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.238345 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.240356 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.240901 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-67zjg" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.252435 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8"] Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.261950 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-frln4"] Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.269673 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-frln4" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.274142 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-wp77d" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.274891 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-chk9j" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.285615 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-frln4"] Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.299370 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-h27hl" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.319002 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f161526-b0fd-453b-8ae7-7b9b7a485b97-cert\") pod \"infra-operator-controller-manager-79d975b745-zqmsr\" (UID: \"2f161526-b0fd-453b-8ae7-7b9b7a485b97\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-zqmsr" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.319102 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzdr4\" (UniqueName: \"kubernetes.io/projected/6bd1df07-8b75-44b8-91a3-4f612b64c279-kube-api-access-vzdr4\") pod \"watcher-operator-controller-manager-5db88f68c-54vfn\" (UID: \"6bd1df07-8b75-44b8-91a3-4f612b64c279\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-54vfn" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.320142 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-h5rvt" Feb 19 10:00:08 crc kubenswrapper[4965]: E0219 10:00:08.320655 4965 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 10:00:08 crc kubenswrapper[4965]: E0219 10:00:08.320715 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f161526-b0fd-453b-8ae7-7b9b7a485b97-cert podName:2f161526-b0fd-453b-8ae7-7b9b7a485b97 nodeName:}" failed. No retries permitted until 2026-02-19 10:00:09.320682859 +0000 UTC m=+1064.942004169 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2f161526-b0fd-453b-8ae7-7b9b7a485b97-cert") pod "infra-operator-controller-manager-79d975b745-zqmsr" (UID: "2f161526-b0fd-453b-8ae7-7b9b7a485b97") : secret "infra-operator-webhook-server-cert" not found Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.338639 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-dff68c48-5928s" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.343615 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzdr4\" (UniqueName: \"kubernetes.io/projected/6bd1df07-8b75-44b8-91a3-4f612b64c279-kube-api-access-vzdr4\") pod \"watcher-operator-controller-manager-5db88f68c-54vfn\" (UID: \"6bd1df07-8b75-44b8-91a3-4f612b64c279\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-54vfn" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.349179 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-jzssc" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.420900 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-webhook-certs\") pod \"openstack-operator-controller-manager-7f6588fc96-6phd8\" (UID: \"186369a2-50b6-4226-be98-8876e469033f\") " pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.420969 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-metrics-certs\") pod \"openstack-operator-controller-manager-7f6588fc96-6phd8\" (UID: \"186369a2-50b6-4226-be98-8876e469033f\") " pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.421070 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzfpl\" (UniqueName: \"kubernetes.io/projected/186369a2-50b6-4226-be98-8876e469033f-kube-api-access-wzfpl\") pod \"openstack-operator-controller-manager-7f6588fc96-6phd8\" (UID: \"186369a2-50b6-4226-be98-8876e469033f\") " pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.421112 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c7jw\" (UniqueName: \"kubernetes.io/projected/f1723aed-01cb-4ac1-b191-299a6dd638e5-kube-api-access-2c7jw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-frln4\" (UID: \"f1723aed-01cb-4ac1-b191-299a6dd638e5\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-frln4" Feb 19 
10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.426550 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-54vfn" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.459643 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-4rtq9"] Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.466615 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-jncdt"] Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.493527 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-78bmk" Feb 19 10:00:08 crc kubenswrapper[4965]: W0219 10:00:08.499229 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b54009_86e7_409a_991e_a406d38ab751.slice/crio-04a5c5e3ec7e4271d5715774e319687e8e7a3f564cc18ac8879797be55338913 WatchSource:0}: Error finding container 04a5c5e3ec7e4271d5715774e319687e8e7a3f564cc18ac8879797be55338913: Status 404 returned error can't find the container with id 04a5c5e3ec7e4271d5715774e319687e8e7a3f564cc18ac8879797be55338913 Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.500613 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztvs5" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.523058 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c7jw\" (UniqueName: \"kubernetes.io/projected/f1723aed-01cb-4ac1-b191-299a6dd638e5-kube-api-access-2c7jw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-frln4\" (UID: \"f1723aed-01cb-4ac1-b191-299a6dd638e5\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-frln4" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.523127 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-webhook-certs\") pod \"openstack-operator-controller-manager-7f6588fc96-6phd8\" (UID: \"186369a2-50b6-4226-be98-8876e469033f\") " pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.523222 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-metrics-certs\") pod \"openstack-operator-controller-manager-7f6588fc96-6phd8\" (UID: \"186369a2-50b6-4226-be98-8876e469033f\") " pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.523349 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzfpl\" (UniqueName: \"kubernetes.io/projected/186369a2-50b6-4226-be98-8876e469033f-kube-api-access-wzfpl\") pod \"openstack-operator-controller-manager-7f6588fc96-6phd8\" (UID: \"186369a2-50b6-4226-be98-8876e469033f\") " pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:00:08 crc kubenswrapper[4965]: E0219 10:00:08.524133 4965 
secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 10:00:08 crc kubenswrapper[4965]: E0219 10:00:08.524184 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-webhook-certs podName:186369a2-50b6-4226-be98-8876e469033f nodeName:}" failed. No retries permitted until 2026-02-19 10:00:09.024168306 +0000 UTC m=+1064.645489616 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-webhook-certs") pod "openstack-operator-controller-manager-7f6588fc96-6phd8" (UID: "186369a2-50b6-4226-be98-8876e469033f") : secret "webhook-server-cert" not found Feb 19 10:00:08 crc kubenswrapper[4965]: E0219 10:00:08.524382 4965 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 10:00:08 crc kubenswrapper[4965]: E0219 10:00:08.524406 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-metrics-certs podName:186369a2-50b6-4226-be98-8876e469033f nodeName:}" failed. No retries permitted until 2026-02-19 10:00:09.024399411 +0000 UTC m=+1064.645720721 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-metrics-certs") pod "openstack-operator-controller-manager-7f6588fc96-6phd8" (UID: "186369a2-50b6-4226-be98-8876e469033f") : secret "metrics-server-cert" not found Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.545001 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzfpl\" (UniqueName: \"kubernetes.io/projected/186369a2-50b6-4226-be98-8876e469033f-kube-api-access-wzfpl\") pod \"openstack-operator-controller-manager-7f6588fc96-6phd8\" (UID: \"186369a2-50b6-4226-be98-8876e469033f\") " pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.552792 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c7jw\" (UniqueName: \"kubernetes.io/projected/f1723aed-01cb-4ac1-b191-299a6dd638e5-kube-api-access-2c7jw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-frln4\" (UID: \"f1723aed-01cb-4ac1-b191-299a6dd638e5\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-frln4" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.574851 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-2g7mq"] Feb 19 10:00:08 crc kubenswrapper[4965]: W0219 10:00:08.594359 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74f4ddc1_28bd_411f_8f0c_c5bfc3bfcec6.slice/crio-6f95e78e76cdb393f0ed850d4d50d74c601cdf8ff81cf85dd542c1c1567c85ed WatchSource:0}: Error finding container 6f95e78e76cdb393f0ed850d4d50d74c601cdf8ff81cf85dd542c1c1567c85ed: Status 404 returned error can't find the container with id 6f95e78e76cdb393f0ed850d4d50d74c601cdf8ff81cf85dd542c1c1567c85ed Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.604829 
4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-frln4" Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.625727 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58e82cd5-3bd0-4f99-b958-29e5541fa49a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq\" (UID: \"58e82cd5-3bd0-4f99-b958-29e5541fa49a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" Feb 19 10:00:08 crc kubenswrapper[4965]: E0219 10:00:08.626435 4965 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 10:00:08 crc kubenswrapper[4965]: E0219 10:00:08.626541 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58e82cd5-3bd0-4f99-b958-29e5541fa49a-cert podName:58e82cd5-3bd0-4f99-b958-29e5541fa49a nodeName:}" failed. No retries permitted until 2026-02-19 10:00:09.626518918 +0000 UTC m=+1065.247840228 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58e82cd5-3bd0-4f99-b958-29e5541fa49a-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" (UID: "58e82cd5-3bd0-4f99-b958-29e5541fa49a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.715637 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-4rtq9" event={"ID":"8e1c4dc5-2d5b-46fb-b3cc-1ae2749fd02c","Type":"ContainerStarted","Data":"20780c72b0c797581e2b1a0cb6db3d34fe7c34b6c91a07c6a8ac88a59e622bc4"} Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.716688 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jncdt" event={"ID":"24b54009-86e7-409a-991e-a406d38ab751","Type":"ContainerStarted","Data":"04a5c5e3ec7e4271d5715774e319687e8e7a3f564cc18ac8879797be55338913"} Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.717490 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-2g7mq" event={"ID":"74f4ddc1-28bd-411f-8f0c-c5bfc3bfcec6","Type":"ContainerStarted","Data":"6f95e78e76cdb393f0ed850d4d50d74c601cdf8ff81cf85dd542c1c1567c85ed"} Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.900822 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-zh77z"] Feb 19 10:00:08 crc kubenswrapper[4965]: W0219 10:00:08.902504 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc94f0d1d_5edd_4b64_b2c7_85bdc5022ec3.slice/crio-38d9197acca3b1c719f51677b449319ce9193554a78ff6ade0f0fdf90e82168e WatchSource:0}: Error finding container 38d9197acca3b1c719f51677b449319ce9193554a78ff6ade0f0fdf90e82168e: Status 404 returned error 
can't find the container with id 38d9197acca3b1c719f51677b449319ce9193554a78ff6ade0f0fdf90e82168e Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.940224 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-bndgq"] Feb 19 10:00:08 crc kubenswrapper[4965]: I0219 10:00:08.948400 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vgqfx"] Feb 19 10:00:08 crc kubenswrapper[4965]: W0219 10:00:08.958427 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe5bbdd4_d10a_4bc6_bd35_76c7abb54600.slice/crio-a7f49d07b004cf629cdececf4a35499b800ab83d4a82371f1b71f3204a9ca260 WatchSource:0}: Error finding container a7f49d07b004cf629cdececf4a35499b800ab83d4a82371f1b71f3204a9ca260: Status 404 returned error can't find the container with id a7f49d07b004cf629cdececf4a35499b800ab83d4a82371f1b71f3204a9ca260 Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.021683 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-z78k7"] Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.028229 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-49xr8"] Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.038554 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-h5skt"] Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.039018 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-metrics-certs\") pod \"openstack-operator-controller-manager-7f6588fc96-6phd8\" (UID: \"186369a2-50b6-4226-be98-8876e469033f\") " 
pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.039183 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-webhook-certs\") pod \"openstack-operator-controller-manager-7f6588fc96-6phd8\" (UID: \"186369a2-50b6-4226-be98-8876e469033f\") " pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.039294 4965 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.039305 4965 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.039360 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-metrics-certs podName:186369a2-50b6-4226-be98-8876e469033f nodeName:}" failed. No retries permitted until 2026-02-19 10:00:10.039345382 +0000 UTC m=+1065.660666692 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-metrics-certs") pod "openstack-operator-controller-manager-7f6588fc96-6phd8" (UID: "186369a2-50b6-4226-be98-8876e469033f") : secret "metrics-server-cert" not found Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.039378 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-webhook-certs podName:186369a2-50b6-4226-be98-8876e469033f nodeName:}" failed. No retries permitted until 2026-02-19 10:00:10.039370572 +0000 UTC m=+1065.660691972 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-webhook-certs") pod "openstack-operator-controller-manager-7f6588fc96-6phd8" (UID: "186369a2-50b6-4226-be98-8876e469033f") : secret "webhook-server-cert" not found Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.046362 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-glzx9"] Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.197009 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-dff68c48-5928s"] Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.211663 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-jzssc"] Feb 19 10:00:09 crc kubenswrapper[4965]: W0219 10:00:09.216300 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e1ae3d6_7af0_406d_b740_98c9f5c9403c.slice/crio-adde21abd2b10941da9b8473c7588d2e8003d7de15a3050c3027196eadc18034 WatchSource:0}: Error finding container adde21abd2b10941da9b8473c7588d2e8003d7de15a3050c3027196eadc18034: Status 404 returned error can't find the container with id adde21abd2b10941da9b8473c7588d2e8003d7de15a3050c3027196eadc18034 Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.219809 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-wp77d"] Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.223918 4965 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lgnqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-jzssc_openstack-operators(7e1ae3d6-7af0-406d-b740-98c9f5c9403c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.230013 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-h5rvt"] Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.230069 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-jzssc" podUID="7e1ae3d6-7af0-406d-b740-98c9f5c9403c" Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.232175 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-h27hl"] Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.232821 4965 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ff8b6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-wp77d_openstack-operators(ca57fee7-64f8-4c49-9170-6f6e618c78e7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.232982 4965 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2jfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-h27hl_openstack-operators(a354e865-3819-4147-a565-4682bc4c6a6c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.234107 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-h27hl" podUID="a354e865-3819-4147-a565-4682bc4c6a6c" Feb 19 10:00:09 crc 
kubenswrapper[4965]: E0219 10:00:09.234114 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-wp77d" podUID="ca57fee7-64f8-4c49-9170-6f6e618c78e7" Feb 19 10:00:09 crc kubenswrapper[4965]: W0219 10:00:09.234441 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bd1df07_8b75_44b8_91a3_4f612b64c279.slice/crio-e1e58453fc7b7a890c85e67c9832ea436193725bac5b64b640058cdc92084d19 WatchSource:0}: Error finding container e1e58453fc7b7a890c85e67c9832ea436193725bac5b64b640058cdc92084d19: Status 404 returned error can't find the container with id e1e58453fc7b7a890c85e67c9832ea436193725bac5b64b640058cdc92084d19 Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.238679 4965 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vzdr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-54vfn_openstack-operators(6bd1df07-8b75-44b8-91a3-4f612b64c279): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.239244 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-7mzd9"] Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.239822 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" 
with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-54vfn" podUID="6bd1df07-8b75-44b8-91a3-4f612b64c279" Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.245408 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-54vfn"] Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.343631 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f161526-b0fd-453b-8ae7-7b9b7a485b97-cert\") pod \"infra-operator-controller-manager-79d975b745-zqmsr\" (UID: \"2f161526-b0fd-453b-8ae7-7b9b7a485b97\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-zqmsr" Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.344630 4965 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.344706 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f161526-b0fd-453b-8ae7-7b9b7a485b97-cert podName:2f161526-b0fd-453b-8ae7-7b9b7a485b97 nodeName:}" failed. No retries permitted until 2026-02-19 10:00:11.344687218 +0000 UTC m=+1066.966008578 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2f161526-b0fd-453b-8ae7-7b9b7a485b97-cert") pod "infra-operator-controller-manager-79d975b745-zqmsr" (UID: "2f161526-b0fd-453b-8ae7-7b9b7a485b97") : secret "infra-operator-webhook-server-cert" not found Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.379433 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-4md54"] Feb 19 10:00:09 crc kubenswrapper[4965]: W0219 10:00:09.385320 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9898282c_422b_49dd_b369_da910d49a2d8.slice/crio-b3162a67e3e93b18b10164e953a33d2288c7e53d4efd55d6e5f96ff0620a44df WatchSource:0}: Error finding container b3162a67e3e93b18b10164e953a33d2288c7e53d4efd55d6e5f96ff0620a44df: Status 404 returned error can't find the container with id b3162a67e3e93b18b10164e953a33d2288c7e53d4efd55d6e5f96ff0620a44df Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.392422 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-frln4"] Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.394246 4965 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2c7jw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-frln4_openstack-operators(f1723aed-01cb-4ac1-b191-299a6dd638e5): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.395384 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-frln4" podUID="f1723aed-01cb-4ac1-b191-299a6dd638e5" Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.399931 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztvs5"] Feb 19 10:00:09 crc kubenswrapper[4965]: W0219 10:00:09.402405 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c1737a3_9dfe_4208_a8da_8be7f09394d9.slice/crio-0d40858bfecab657cee903893384c7ea0fd259cf6494795a4f0919639c4dfe30 WatchSource:0}: Error finding container 0d40858bfecab657cee903893384c7ea0fd259cf6494795a4f0919639c4dfe30: Status 404 returned error can't find the container with id 0d40858bfecab657cee903893384c7ea0fd259cf6494795a4f0919639c4dfe30 Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.405256 4965 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} 
{} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hgkkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d8bf5c495-ztvs5_openstack-operators(7c1737a3-9dfe-4208-a8da-8be7f09394d9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.406546 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztvs5" 
podUID="7c1737a3-9dfe-4208-a8da-8be7f09394d9" Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.648751 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58e82cd5-3bd0-4f99-b958-29e5541fa49a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq\" (UID: \"58e82cd5-3bd0-4f99-b958-29e5541fa49a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.649072 4965 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.649160 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58e82cd5-3bd0-4f99-b958-29e5541fa49a-cert podName:58e82cd5-3bd0-4f99-b958-29e5541fa49a nodeName:}" failed. No retries permitted until 2026-02-19 10:00:11.649126931 +0000 UTC m=+1067.270448241 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58e82cd5-3bd0-4f99-b958-29e5541fa49a-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" (UID: "58e82cd5-3bd0-4f99-b958-29e5541fa49a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.727296 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-jzssc" event={"ID":"7e1ae3d6-7af0-406d-b740-98c9f5c9403c","Type":"ContainerStarted","Data":"adde21abd2b10941da9b8473c7588d2e8003d7de15a3050c3027196eadc18034"} Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.730045 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-jzssc" podUID="7e1ae3d6-7af0-406d-b740-98c9f5c9403c" Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.741226 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-dff68c48-5928s" event={"ID":"f1fcb3fa-62de-4b0b-93db-3e401ff94fe4","Type":"ContainerStarted","Data":"24ecd30e36057aac277ec0bc6b013eb5b6d59ba216be7e811cacbf79fdd8dca7"} Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.743871 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-h5rvt" event={"ID":"e70fa350-bca9-4007-80a9-15cfb3a56b11","Type":"ContainerStarted","Data":"861f4c0456179f4a7613f2bb2949d0a45a1bc154dfa36598802ea2b81c419c47"} Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.750750 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztvs5" event={"ID":"7c1737a3-9dfe-4208-a8da-8be7f09394d9","Type":"ContainerStarted","Data":"0d40858bfecab657cee903893384c7ea0fd259cf6494795a4f0919639c4dfe30"} Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.753384 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztvs5" podUID="7c1737a3-9dfe-4208-a8da-8be7f09394d9" Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.755172 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-bndgq" event={"ID":"ef077548-5e44-43f1-9f0d-3cf539bca16b","Type":"ContainerStarted","Data":"473c17620d0b6fd8ba98a6927c8de09dcdb66af78427e7e140879b99da8708b4"} Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.756817 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-4md54" event={"ID":"9898282c-422b-49dd-b369-da910d49a2d8","Type":"ContainerStarted","Data":"b3162a67e3e93b18b10164e953a33d2288c7e53d4efd55d6e5f96ff0620a44df"} Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.758462 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7mzd9" event={"ID":"18230479-3d13-49f7-a2a1-95a191acb3db","Type":"ContainerStarted","Data":"18c3de5d8fec3b3d4fbd03517b8242a31b2f0242a2db547889354823e5e36fd2"} Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.761482 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-glzx9" 
event={"ID":"a0ff2743-9ab6-4388-b0af-06e06c3e7587","Type":"ContainerStarted","Data":"18aa08679aa75fa1f5f113efa8a0a06ca2eb55fc494e0a01b51b347bcb8ea642"} Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.763143 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-54vfn" event={"ID":"6bd1df07-8b75-44b8-91a3-4f612b64c279","Type":"ContainerStarted","Data":"e1e58453fc7b7a890c85e67c9832ea436193725bac5b64b640058cdc92084d19"} Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.765088 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-54vfn" podUID="6bd1df07-8b75-44b8-91a3-4f612b64c279" Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.789567 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-wp77d" event={"ID":"ca57fee7-64f8-4c49-9170-6f6e618c78e7","Type":"ContainerStarted","Data":"3f0053282d9a629345633b58d1e79eb740333ac010ab602d63c98a71e7ba820e"} Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.791601 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-h5skt" event={"ID":"5747cc94-5621-4a7d-b599-f2a0f2a2aa29","Type":"ContainerStarted","Data":"8632c5672e0d2e3ea0aee960f1f287fc8d015d1bd1fd88dd47cad97a9e5c60cd"} Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.791833 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-wp77d" podUID="ca57fee7-64f8-4c49-9170-6f6e618c78e7" Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.803443 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-frln4" event={"ID":"f1723aed-01cb-4ac1-b191-299a6dd638e5","Type":"ContainerStarted","Data":"797eb82893f8f5ef4a8dc6f76f676f0d61179b4a64e7dc7c8764ae31cb07331e"} Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.814242 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-z78k7" event={"ID":"73c20094-0abc-4525-ae77-d571755841fa","Type":"ContainerStarted","Data":"e697efa03d29b5bff9a38ad95d05187e98e72229fa94b46faa748af1bc8b614f"} Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.814304 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-frln4" podUID="f1723aed-01cb-4ac1-b191-299a6dd638e5" Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.832458 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-h27hl" event={"ID":"a354e865-3819-4147-a565-4682bc4c6a6c","Type":"ContainerStarted","Data":"b648a9b9f58a796c7945313546fbdb1bd574b0a47d4b9078b8532309f23bd537"} Feb 19 10:00:09 crc kubenswrapper[4965]: E0219 10:00:09.835223 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-h27hl" podUID="a354e865-3819-4147-a565-4682bc4c6a6c" Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.836174 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vgqfx" event={"ID":"fe5bbdd4-d10a-4bc6-bd35-76c7abb54600","Type":"ContainerStarted","Data":"a7f49d07b004cf629cdececf4a35499b800ab83d4a82371f1b71f3204a9ca260"} Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.840768 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-zh77z" event={"ID":"c94f0d1d-5edd-4b64-b2c7-85bdc5022ec3","Type":"ContainerStarted","Data":"38d9197acca3b1c719f51677b449319ce9193554a78ff6ade0f0fdf90e82168e"} Feb 19 10:00:09 crc kubenswrapper[4965]: I0219 10:00:09.845358 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-49xr8" event={"ID":"ec34bcd2-48d7-4522-a32a-268a3a1b385c","Type":"ContainerStarted","Data":"79403388f30ca86a00946f9e8733c2e0dd89e719bc6db4de09053bfaa11efefa"} Feb 19 10:00:10 crc kubenswrapper[4965]: I0219 10:00:10.061778 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-metrics-certs\") pod \"openstack-operator-controller-manager-7f6588fc96-6phd8\" (UID: \"186369a2-50b6-4226-be98-8876e469033f\") " pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:00:10 crc kubenswrapper[4965]: I0219 10:00:10.061949 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-webhook-certs\") pod \"openstack-operator-controller-manager-7f6588fc96-6phd8\" (UID: \"186369a2-50b6-4226-be98-8876e469033f\") " pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:00:10 crc kubenswrapper[4965]: E0219 10:00:10.061951 4965 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 10:00:10 crc kubenswrapper[4965]: E0219 10:00:10.062038 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-metrics-certs podName:186369a2-50b6-4226-be98-8876e469033f nodeName:}" failed. No retries permitted until 2026-02-19 10:00:12.062016376 +0000 UTC m=+1067.683337686 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-metrics-certs") pod "openstack-operator-controller-manager-7f6588fc96-6phd8" (UID: "186369a2-50b6-4226-be98-8876e469033f") : secret "metrics-server-cert" not found Feb 19 10:00:10 crc kubenswrapper[4965]: E0219 10:00:10.062089 4965 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 10:00:10 crc kubenswrapper[4965]: E0219 10:00:10.062142 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-webhook-certs podName:186369a2-50b6-4226-be98-8876e469033f nodeName:}" failed. No retries permitted until 2026-02-19 10:00:12.062126549 +0000 UTC m=+1067.683447859 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-webhook-certs") pod "openstack-operator-controller-manager-7f6588fc96-6phd8" (UID: "186369a2-50b6-4226-be98-8876e469033f") : secret "webhook-server-cert" not found Feb 19 10:00:10 crc kubenswrapper[4965]: E0219 10:00:10.853715 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-jzssc" podUID="7e1ae3d6-7af0-406d-b740-98c9f5c9403c" Feb 19 10:00:10 crc kubenswrapper[4965]: E0219 10:00:10.853897 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztvs5" podUID="7c1737a3-9dfe-4208-a8da-8be7f09394d9" Feb 19 10:00:10 crc kubenswrapper[4965]: E0219 10:00:10.853897 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-frln4" podUID="f1723aed-01cb-4ac1-b191-299a6dd638e5" Feb 19 10:00:10 crc kubenswrapper[4965]: E0219 10:00:10.854652 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-wp77d" podUID="ca57fee7-64f8-4c49-9170-6f6e618c78e7" Feb 19 10:00:10 crc kubenswrapper[4965]: E0219 10:00:10.854736 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-h27hl" podUID="a354e865-3819-4147-a565-4682bc4c6a6c" Feb 19 10:00:10 crc kubenswrapper[4965]: E0219 10:00:10.855172 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-54vfn" podUID="6bd1df07-8b75-44b8-91a3-4f612b64c279" Feb 19 10:00:11 crc kubenswrapper[4965]: I0219 10:00:11.381067 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f161526-b0fd-453b-8ae7-7b9b7a485b97-cert\") pod \"infra-operator-controller-manager-79d975b745-zqmsr\" (UID: \"2f161526-b0fd-453b-8ae7-7b9b7a485b97\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-zqmsr" Feb 19 10:00:11 crc kubenswrapper[4965]: E0219 10:00:11.381308 4965 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 10:00:11 crc kubenswrapper[4965]: E0219 10:00:11.381416 4965 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/2f161526-b0fd-453b-8ae7-7b9b7a485b97-cert podName:2f161526-b0fd-453b-8ae7-7b9b7a485b97 nodeName:}" failed. No retries permitted until 2026-02-19 10:00:15.381382677 +0000 UTC m=+1071.002704047 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2f161526-b0fd-453b-8ae7-7b9b7a485b97-cert") pod "infra-operator-controller-manager-79d975b745-zqmsr" (UID: "2f161526-b0fd-453b-8ae7-7b9b7a485b97") : secret "infra-operator-webhook-server-cert" not found Feb 19 10:00:11 crc kubenswrapper[4965]: I0219 10:00:11.685214 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58e82cd5-3bd0-4f99-b958-29e5541fa49a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq\" (UID: \"58e82cd5-3bd0-4f99-b958-29e5541fa49a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" Feb 19 10:00:11 crc kubenswrapper[4965]: E0219 10:00:11.685707 4965 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 10:00:11 crc kubenswrapper[4965]: E0219 10:00:11.685756 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58e82cd5-3bd0-4f99-b958-29e5541fa49a-cert podName:58e82cd5-3bd0-4f99-b958-29e5541fa49a nodeName:}" failed. No retries permitted until 2026-02-19 10:00:15.685741089 +0000 UTC m=+1071.307062409 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58e82cd5-3bd0-4f99-b958-29e5541fa49a-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" (UID: "58e82cd5-3bd0-4f99-b958-29e5541fa49a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 10:00:12 crc kubenswrapper[4965]: I0219 10:00:12.090820 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-metrics-certs\") pod \"openstack-operator-controller-manager-7f6588fc96-6phd8\" (UID: \"186369a2-50b6-4226-be98-8876e469033f\") " pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:00:12 crc kubenswrapper[4965]: I0219 10:00:12.090952 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-webhook-certs\") pod \"openstack-operator-controller-manager-7f6588fc96-6phd8\" (UID: \"186369a2-50b6-4226-be98-8876e469033f\") " pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:00:12 crc kubenswrapper[4965]: E0219 10:00:12.091027 4965 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 10:00:12 crc kubenswrapper[4965]: E0219 10:00:12.091076 4965 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 10:00:12 crc kubenswrapper[4965]: E0219 10:00:12.091113 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-metrics-certs podName:186369a2-50b6-4226-be98-8876e469033f nodeName:}" failed. No retries permitted until 2026-02-19 10:00:16.09109362 +0000 UTC m=+1071.712414930 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-metrics-certs") pod "openstack-operator-controller-manager-7f6588fc96-6phd8" (UID: "186369a2-50b6-4226-be98-8876e469033f") : secret "metrics-server-cert" not found Feb 19 10:00:12 crc kubenswrapper[4965]: E0219 10:00:12.091135 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-webhook-certs podName:186369a2-50b6-4226-be98-8876e469033f nodeName:}" failed. No retries permitted until 2026-02-19 10:00:16.091126911 +0000 UTC m=+1071.712448221 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-webhook-certs") pod "openstack-operator-controller-manager-7f6588fc96-6phd8" (UID: "186369a2-50b6-4226-be98-8876e469033f") : secret "webhook-server-cert" not found Feb 19 10:00:15 crc kubenswrapper[4965]: I0219 10:00:15.451558 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f161526-b0fd-453b-8ae7-7b9b7a485b97-cert\") pod \"infra-operator-controller-manager-79d975b745-zqmsr\" (UID: \"2f161526-b0fd-453b-8ae7-7b9b7a485b97\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-zqmsr" Feb 19 10:00:15 crc kubenswrapper[4965]: E0219 10:00:15.451758 4965 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 10:00:15 crc kubenswrapper[4965]: E0219 10:00:15.451927 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f161526-b0fd-453b-8ae7-7b9b7a485b97-cert podName:2f161526-b0fd-453b-8ae7-7b9b7a485b97 nodeName:}" failed. No retries permitted until 2026-02-19 10:00:23.451907237 +0000 UTC m=+1079.073228547 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2f161526-b0fd-453b-8ae7-7b9b7a485b97-cert") pod "infra-operator-controller-manager-79d975b745-zqmsr" (UID: "2f161526-b0fd-453b-8ae7-7b9b7a485b97") : secret "infra-operator-webhook-server-cert" not found Feb 19 10:00:15 crc kubenswrapper[4965]: I0219 10:00:15.756190 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58e82cd5-3bd0-4f99-b958-29e5541fa49a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq\" (UID: \"58e82cd5-3bd0-4f99-b958-29e5541fa49a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" Feb 19 10:00:15 crc kubenswrapper[4965]: E0219 10:00:15.756360 4965 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 10:00:15 crc kubenswrapper[4965]: E0219 10:00:15.756454 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58e82cd5-3bd0-4f99-b958-29e5541fa49a-cert podName:58e82cd5-3bd0-4f99-b958-29e5541fa49a nodeName:}" failed. No retries permitted until 2026-02-19 10:00:23.756432862 +0000 UTC m=+1079.377754272 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58e82cd5-3bd0-4f99-b958-29e5541fa49a-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" (UID: "58e82cd5-3bd0-4f99-b958-29e5541fa49a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 10:00:16 crc kubenswrapper[4965]: I0219 10:00:16.161292 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-webhook-certs\") pod \"openstack-operator-controller-manager-7f6588fc96-6phd8\" (UID: \"186369a2-50b6-4226-be98-8876e469033f\") " pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:00:16 crc kubenswrapper[4965]: I0219 10:00:16.161375 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-metrics-certs\") pod \"openstack-operator-controller-manager-7f6588fc96-6phd8\" (UID: \"186369a2-50b6-4226-be98-8876e469033f\") " pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:00:16 crc kubenswrapper[4965]: E0219 10:00:16.161459 4965 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 10:00:16 crc kubenswrapper[4965]: E0219 10:00:16.161551 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-webhook-certs podName:186369a2-50b6-4226-be98-8876e469033f nodeName:}" failed. No retries permitted until 2026-02-19 10:00:24.161529057 +0000 UTC m=+1079.782850377 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-webhook-certs") pod "openstack-operator-controller-manager-7f6588fc96-6phd8" (UID: "186369a2-50b6-4226-be98-8876e469033f") : secret "webhook-server-cert" not found Feb 19 10:00:16 crc kubenswrapper[4965]: E0219 10:00:16.161611 4965 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 10:00:16 crc kubenswrapper[4965]: E0219 10:00:16.161667 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-metrics-certs podName:186369a2-50b6-4226-be98-8876e469033f nodeName:}" failed. No retries permitted until 2026-02-19 10:00:24.16165015 +0000 UTC m=+1079.782971570 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-metrics-certs") pod "openstack-operator-controller-manager-7f6588fc96-6phd8" (UID: "186369a2-50b6-4226-be98-8876e469033f") : secret "metrics-server-cert" not found Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.920832 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-2g7mq" event={"ID":"74f4ddc1-28bd-411f-8f0c-c5bfc3bfcec6","Type":"ContainerStarted","Data":"522ee7768525a7e72d80d9eb16eb7c5a6ec04486915025e9d8b3d36448075e95"} Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.922421 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-2g7mq" Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.926163 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-glzx9" 
event={"ID":"a0ff2743-9ab6-4388-b0af-06e06c3e7587","Type":"ContainerStarted","Data":"5a4b45dd6fa9e4c1a672c3a74129ab88d543a169469b076e4a8d25630a10e722"} Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.926280 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-glzx9" Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.930145 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-4rtq9" event={"ID":"8e1c4dc5-2d5b-46fb-b3cc-1ae2749fd02c","Type":"ContainerStarted","Data":"a2973afd379853a076f0078bff674ef6499f5d22791a1ebe35a01f7280fb62b5"} Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.930385 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-4rtq9" Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.932012 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-h5skt" event={"ID":"5747cc94-5621-4a7d-b599-f2a0f2a2aa29","Type":"ContainerStarted","Data":"1b662cbd355ad507ac119d90e3bfea44042796febe97c3d140e1fd8500052730"} Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.932220 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-h5skt" Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.940248 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jncdt" event={"ID":"24b54009-86e7-409a-991e-a406d38ab751","Type":"ContainerStarted","Data":"b7e24a89a2371fe68c1c15b0a9db17e393a51ec4bd9c42e226c56829e17d1fe7"} Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.940992 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jncdt" Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.949352 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-z78k7" event={"ID":"73c20094-0abc-4525-ae77-d571755841fa","Type":"ContainerStarted","Data":"95ae8fe402783b5b98774cb46d0d779b514999524f83a9e3f62478d36f93f17f"} Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.950074 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-z78k7" Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.953232 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-49xr8" event={"ID":"ec34bcd2-48d7-4522-a32a-268a3a1b385c","Type":"ContainerStarted","Data":"587d116c254b3369ab952d902bfef326b8ad11fcb9934182dbba88505979ce43"} Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.953745 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-49xr8" Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.957160 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-dff68c48-5928s" event={"ID":"f1fcb3fa-62de-4b0b-93db-3e401ff94fe4","Type":"ContainerStarted","Data":"2f02bd5982960c9d1286c0cac0bdb9de7af6b9850fd4ff63deffe5316d875d82"} Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.957633 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-dff68c48-5928s" Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.959165 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-2g7mq" 
podStartSLOduration=2.7144182519999998 podStartE2EDuration="13.959150024s" podCreationTimestamp="2026-02-19 10:00:07 +0000 UTC" firstStartedPulling="2026-02-19 10:00:08.596503047 +0000 UTC m=+1064.217824357" lastFinishedPulling="2026-02-19 10:00:19.841234819 +0000 UTC m=+1075.462556129" observedRunningTime="2026-02-19 10:00:20.957596327 +0000 UTC m=+1076.578917637" watchObservedRunningTime="2026-02-19 10:00:20.959150024 +0000 UTC m=+1076.580471334" Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.962640 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-4md54" event={"ID":"9898282c-422b-49dd-b369-da910d49a2d8","Type":"ContainerStarted","Data":"2dff9aa453c140650212dcecf62ecf0cba38bfd3f19e045382119d143ee22ff0"} Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.962919 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-4md54" Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.976485 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7mzd9" event={"ID":"18230479-3d13-49f7-a2a1-95a191acb3db","Type":"ContainerStarted","Data":"ff86a9cf8fce5fc0d4fb4ee117c41110597b0c11dc3a7ffafadb1e3e3c2707f9"} Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.977271 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7mzd9" Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.991762 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-h5rvt" event={"ID":"e70fa350-bca9-4007-80a9-15cfb3a56b11","Type":"ContainerStarted","Data":"b15ff72ee2791ede4b58d57423af5663f739e1988baf030961079b20880fd2f5"} Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.992608 4965 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-h5rvt" Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.997158 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-bndgq" event={"ID":"ef077548-5e44-43f1-9f0d-3cf539bca16b","Type":"ContainerStarted","Data":"0012c67510e342b2f1332743c5b8838407cf0e0cdda7b6fd94944942f0344572"} Feb 19 10:00:20 crc kubenswrapper[4965]: I0219 10:00:20.997213 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-bndgq" Feb 19 10:00:21 crc kubenswrapper[4965]: I0219 10:00:21.004294 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vgqfx" event={"ID":"fe5bbdd4-d10a-4bc6-bd35-76c7abb54600","Type":"ContainerStarted","Data":"b506fe6ccf9b27e935deb12f2f67189e2d0c4a5bf7f83e3b19b9dccc28dd8740"} Feb 19 10:00:21 crc kubenswrapper[4965]: I0219 10:00:21.004989 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vgqfx" Feb 19 10:00:21 crc kubenswrapper[4965]: I0219 10:00:21.008869 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-49xr8" podStartSLOduration=3.177906078 podStartE2EDuration="14.008852144s" podCreationTimestamp="2026-02-19 10:00:07 +0000 UTC" firstStartedPulling="2026-02-19 10:00:09.028110028 +0000 UTC m=+1064.649431338" lastFinishedPulling="2026-02-19 10:00:19.859056084 +0000 UTC m=+1075.480377404" observedRunningTime="2026-02-19 10:00:20.998168305 +0000 UTC m=+1076.619489625" watchObservedRunningTime="2026-02-19 10:00:21.008852144 +0000 UTC m=+1076.630173454" Feb 19 10:00:21 crc kubenswrapper[4965]: I0219 10:00:21.012965 4965 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-zh77z" event={"ID":"c94f0d1d-5edd-4b64-b2c7-85bdc5022ec3","Type":"ContainerStarted","Data":"6a099eea10ef613e5c7b0cc532ebbca786b1cfbf45df6c0b84c35e1703761fb1"} Feb 19 10:00:21 crc kubenswrapper[4965]: I0219 10:00:21.013338 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-zh77z" Feb 19 10:00:21 crc kubenswrapper[4965]: I0219 10:00:21.040505 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-glzx9" podStartSLOduration=3.240949014 podStartE2EDuration="14.040481785s" podCreationTimestamp="2026-02-19 10:00:07 +0000 UTC" firstStartedPulling="2026-02-19 10:00:09.041177266 +0000 UTC m=+1064.662498586" lastFinishedPulling="2026-02-19 10:00:19.840710047 +0000 UTC m=+1075.462031357" observedRunningTime="2026-02-19 10:00:21.028615736 +0000 UTC m=+1076.649937046" watchObservedRunningTime="2026-02-19 10:00:21.040481785 +0000 UTC m=+1076.661803095" Feb 19 10:00:21 crc kubenswrapper[4965]: I0219 10:00:21.128464 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-dff68c48-5928s" podStartSLOduration=3.484841873 podStartE2EDuration="14.128437487s" podCreationTimestamp="2026-02-19 10:00:07 +0000 UTC" firstStartedPulling="2026-02-19 10:00:09.209636128 +0000 UTC m=+1064.830957438" lastFinishedPulling="2026-02-19 10:00:19.853231742 +0000 UTC m=+1075.474553052" observedRunningTime="2026-02-19 10:00:21.114538048 +0000 UTC m=+1076.735859368" watchObservedRunningTime="2026-02-19 10:00:21.128437487 +0000 UTC m=+1076.749758797" Feb 19 10:00:21 crc kubenswrapper[4965]: I0219 10:00:21.130810 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jncdt" podStartSLOduration=2.81284848 podStartE2EDuration="14.130797055s" podCreationTimestamp="2026-02-19 10:00:07 +0000 UTC" firstStartedPulling="2026-02-19 10:00:08.5231255 +0000 UTC m=+1064.144446810" lastFinishedPulling="2026-02-19 10:00:19.841074065 +0000 UTC m=+1075.462395385" observedRunningTime="2026-02-19 10:00:21.061434945 +0000 UTC m=+1076.682756275" watchObservedRunningTime="2026-02-19 10:00:21.130797055 +0000 UTC m=+1076.752118365" Feb 19 10:00:21 crc kubenswrapper[4965]: I0219 10:00:21.183766 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-4rtq9" podStartSLOduration=2.833303146 podStartE2EDuration="14.183748323s" podCreationTimestamp="2026-02-19 10:00:07 +0000 UTC" firstStartedPulling="2026-02-19 10:00:08.502724213 +0000 UTC m=+1064.124045523" lastFinishedPulling="2026-02-19 10:00:19.85316938 +0000 UTC m=+1075.474490700" observedRunningTime="2026-02-19 10:00:21.18154313 +0000 UTC m=+1076.802864430" watchObservedRunningTime="2026-02-19 10:00:21.183748323 +0000 UTC m=+1076.805069633" Feb 19 10:00:21 crc kubenswrapper[4965]: I0219 10:00:21.184884 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-h5skt" podStartSLOduration=3.307836353 podStartE2EDuration="14.184874431s" podCreationTimestamp="2026-02-19 10:00:07 +0000 UTC" firstStartedPulling="2026-02-19 10:00:09.029456971 +0000 UTC m=+1064.650778281" lastFinishedPulling="2026-02-19 10:00:19.906495039 +0000 UTC m=+1075.527816359" observedRunningTime="2026-02-19 10:00:21.159516664 +0000 UTC m=+1076.780837994" watchObservedRunningTime="2026-02-19 10:00:21.184874431 +0000 UTC m=+1076.806195741" Feb 19 10:00:21 crc kubenswrapper[4965]: I0219 10:00:21.202871 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-z78k7" podStartSLOduration=3.388066067 podStartE2EDuration="14.202853759s" podCreationTimestamp="2026-02-19 10:00:07 +0000 UTC" firstStartedPulling="2026-02-19 10:00:09.027248387 +0000 UTC m=+1064.648569697" lastFinishedPulling="2026-02-19 10:00:19.842036079 +0000 UTC m=+1075.463357389" observedRunningTime="2026-02-19 10:00:21.198312539 +0000 UTC m=+1076.819633869" watchObservedRunningTime="2026-02-19 10:00:21.202853759 +0000 UTC m=+1076.824175069" Feb 19 10:00:21 crc kubenswrapper[4965]: I0219 10:00:21.222747 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-zh77z" podStartSLOduration=3.288375539 podStartE2EDuration="14.222730323s" podCreationTimestamp="2026-02-19 10:00:07 +0000 UTC" firstStartedPulling="2026-02-19 10:00:08.906534127 +0000 UTC m=+1064.527855437" lastFinishedPulling="2026-02-19 10:00:19.840888901 +0000 UTC m=+1075.462210221" observedRunningTime="2026-02-19 10:00:21.22216154 +0000 UTC m=+1076.843482870" watchObservedRunningTime="2026-02-19 10:00:21.222730323 +0000 UTC m=+1076.844051633" Feb 19 10:00:21 crc kubenswrapper[4965]: I0219 10:00:21.247911 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vgqfx" podStartSLOduration=3.356318033 podStartE2EDuration="14.247896566s" podCreationTimestamp="2026-02-19 10:00:07 +0000 UTC" firstStartedPulling="2026-02-19 10:00:08.962458909 +0000 UTC m=+1064.583780209" lastFinishedPulling="2026-02-19 10:00:19.854037422 +0000 UTC m=+1075.475358742" observedRunningTime="2026-02-19 10:00:21.244720879 +0000 UTC m=+1076.866042199" watchObservedRunningTime="2026-02-19 10:00:21.247896566 +0000 UTC m=+1076.869217876" Feb 19 10:00:21 crc kubenswrapper[4965]: I0219 10:00:21.264390 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-4md54" podStartSLOduration=3.802565461 podStartE2EDuration="14.264372517s" podCreationTimestamp="2026-02-19 10:00:07 +0000 UTC" firstStartedPulling="2026-02-19 10:00:09.392754528 +0000 UTC m=+1065.014075838" lastFinishedPulling="2026-02-19 10:00:19.854561584 +0000 UTC m=+1075.475882894" observedRunningTime="2026-02-19 10:00:21.263117377 +0000 UTC m=+1076.884438687" watchObservedRunningTime="2026-02-19 10:00:21.264372517 +0000 UTC m=+1076.885693827" Feb 19 10:00:21 crc kubenswrapper[4965]: I0219 10:00:21.288770 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7mzd9" podStartSLOduration=3.6088532730000003 podStartE2EDuration="14.288756411s" podCreationTimestamp="2026-02-19 10:00:07 +0000 UTC" firstStartedPulling="2026-02-19 10:00:09.220752489 +0000 UTC m=+1064.842073789" lastFinishedPulling="2026-02-19 10:00:19.900655607 +0000 UTC m=+1075.521976927" observedRunningTime="2026-02-19 10:00:21.287525031 +0000 UTC m=+1076.908846341" watchObservedRunningTime="2026-02-19 10:00:21.288756411 +0000 UTC m=+1076.910077721" Feb 19 10:00:21 crc kubenswrapper[4965]: I0219 10:00:21.303214 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-bndgq" podStartSLOduration=3.403720318 podStartE2EDuration="14.303184233s" podCreationTimestamp="2026-02-19 10:00:07 +0000 UTC" firstStartedPulling="2026-02-19 10:00:08.960511171 +0000 UTC m=+1064.581832481" lastFinishedPulling="2026-02-19 10:00:19.859975046 +0000 UTC m=+1075.481296396" observedRunningTime="2026-02-19 10:00:21.298425076 +0000 UTC m=+1076.919746386" watchObservedRunningTime="2026-02-19 10:00:21.303184233 +0000 UTC m=+1076.924505543" Feb 19 10:00:21 crc kubenswrapper[4965]: I0219 10:00:21.329544 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-h5rvt" podStartSLOduration=3.693170978 podStartE2EDuration="14.329529514s" podCreationTimestamp="2026-02-19 10:00:07 +0000 UTC" firstStartedPulling="2026-02-19 10:00:09.210046459 +0000 UTC m=+1064.831367769" lastFinishedPulling="2026-02-19 10:00:19.846404975 +0000 UTC m=+1075.467726305" observedRunningTime="2026-02-19 10:00:21.327241568 +0000 UTC m=+1076.948562888" watchObservedRunningTime="2026-02-19 10:00:21.329529514 +0000 UTC m=+1076.950850824" Feb 19 10:00:23 crc kubenswrapper[4965]: I0219 10:00:23.490148 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f161526-b0fd-453b-8ae7-7b9b7a485b97-cert\") pod \"infra-operator-controller-manager-79d975b745-zqmsr\" (UID: \"2f161526-b0fd-453b-8ae7-7b9b7a485b97\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-zqmsr" Feb 19 10:00:23 crc kubenswrapper[4965]: I0219 10:00:23.495684 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f161526-b0fd-453b-8ae7-7b9b7a485b97-cert\") pod \"infra-operator-controller-manager-79d975b745-zqmsr\" (UID: \"2f161526-b0fd-453b-8ae7-7b9b7a485b97\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-zqmsr" Feb 19 10:00:23 crc kubenswrapper[4965]: I0219 10:00:23.697844 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-zqmsr" Feb 19 10:00:23 crc kubenswrapper[4965]: I0219 10:00:23.795354 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58e82cd5-3bd0-4f99-b958-29e5541fa49a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq\" (UID: \"58e82cd5-3bd0-4f99-b958-29e5541fa49a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" Feb 19 10:00:23 crc kubenswrapper[4965]: I0219 10:00:23.802262 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58e82cd5-3bd0-4f99-b958-29e5541fa49a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq\" (UID: \"58e82cd5-3bd0-4f99-b958-29e5541fa49a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" Feb 19 10:00:23 crc kubenswrapper[4965]: I0219 10:00:23.827431 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" Feb 19 10:00:24 crc kubenswrapper[4965]: I0219 10:00:24.116295 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-zqmsr"] Feb 19 10:00:24 crc kubenswrapper[4965]: W0219 10:00:24.122686 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f161526_b0fd_453b_8ae7_7b9b7a485b97.slice/crio-2b0a830b4df423a200d95e78826730a66ae0eb8e2b11d5ea40f6c81d1a54bb37 WatchSource:0}: Error finding container 2b0a830b4df423a200d95e78826730a66ae0eb8e2b11d5ea40f6c81d1a54bb37: Status 404 returned error can't find the container with id 2b0a830b4df423a200d95e78826730a66ae0eb8e2b11d5ea40f6c81d1a54bb37 Feb 19 10:00:24 crc kubenswrapper[4965]: I0219 10:00:24.200641 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-webhook-certs\") pod \"openstack-operator-controller-manager-7f6588fc96-6phd8\" (UID: \"186369a2-50b6-4226-be98-8876e469033f\") " pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:00:24 crc kubenswrapper[4965]: I0219 10:00:24.200710 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-metrics-certs\") pod \"openstack-operator-controller-manager-7f6588fc96-6phd8\" (UID: \"186369a2-50b6-4226-be98-8876e469033f\") " pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:00:24 crc kubenswrapper[4965]: E0219 10:00:24.200881 4965 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 10:00:24 crc kubenswrapper[4965]: E0219 10:00:24.201799 4965 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-webhook-certs podName:186369a2-50b6-4226-be98-8876e469033f nodeName:}" failed. No retries permitted until 2026-02-19 10:00:40.201006593 +0000 UTC m=+1095.822327933 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-webhook-certs") pod "openstack-operator-controller-manager-7f6588fc96-6phd8" (UID: "186369a2-50b6-4226-be98-8876e469033f") : secret "webhook-server-cert" not found Feb 19 10:00:24 crc kubenswrapper[4965]: I0219 10:00:24.204468 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-metrics-certs\") pod \"openstack-operator-controller-manager-7f6588fc96-6phd8\" (UID: \"186369a2-50b6-4226-be98-8876e469033f\") " pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:00:24 crc kubenswrapper[4965]: I0219 10:00:24.249980 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq"] Feb 19 10:00:24 crc kubenswrapper[4965]: W0219 10:00:24.257327 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58e82cd5_3bd0_4f99_b958_29e5541fa49a.slice/crio-6ab5fccba561c5f98006943a3b9f56b9c474dab53012c639babeba54094edf7c WatchSource:0}: Error finding container 6ab5fccba561c5f98006943a3b9f56b9c474dab53012c639babeba54094edf7c: Status 404 returned error can't find the container with id 6ab5fccba561c5f98006943a3b9f56b9c474dab53012c639babeba54094edf7c Feb 19 10:00:25 crc kubenswrapper[4965]: I0219 10:00:25.049474 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" 
event={"ID":"58e82cd5-3bd0-4f99-b958-29e5541fa49a","Type":"ContainerStarted","Data":"6ab5fccba561c5f98006943a3b9f56b9c474dab53012c639babeba54094edf7c"} Feb 19 10:00:25 crc kubenswrapper[4965]: I0219 10:00:25.050983 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-zqmsr" event={"ID":"2f161526-b0fd-453b-8ae7-7b9b7a485b97","Type":"ContainerStarted","Data":"2b0a830b4df423a200d95e78826730a66ae0eb8e2b11d5ea40f6c81d1a54bb37"} Feb 19 10:00:27 crc kubenswrapper[4965]: I0219 10:00:27.645775 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jncdt" Feb 19 10:00:27 crc kubenswrapper[4965]: I0219 10:00:27.706963 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-2g7mq" Feb 19 10:00:27 crc kubenswrapper[4965]: I0219 10:00:27.750816 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-bndgq" Feb 19 10:00:27 crc kubenswrapper[4965]: I0219 10:00:27.751467 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-4rtq9" Feb 19 10:00:27 crc kubenswrapper[4965]: I0219 10:00:27.783879 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-vgqfx" Feb 19 10:00:27 crc kubenswrapper[4965]: I0219 10:00:27.913327 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-z78k7" Feb 19 10:00:27 crc kubenswrapper[4965]: I0219 10:00:27.931832 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-h5skt" Feb 19 10:00:27 crc kubenswrapper[4965]: I0219 10:00:27.936878 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-zh77z" Feb 19 10:00:28 crc kubenswrapper[4965]: I0219 10:00:28.129849 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-49xr8" Feb 19 10:00:28 crc kubenswrapper[4965]: I0219 10:00:28.150024 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-4md54" Feb 19 10:00:28 crc kubenswrapper[4965]: I0219 10:00:28.184399 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-glzx9" Feb 19 10:00:28 crc kubenswrapper[4965]: I0219 10:00:28.221879 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7mzd9" Feb 19 10:00:28 crc kubenswrapper[4965]: I0219 10:00:28.323687 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-h5rvt" Feb 19 10:00:28 crc kubenswrapper[4965]: I0219 10:00:28.353662 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-dff68c48-5928s" Feb 19 10:00:36 crc kubenswrapper[4965]: I0219 10:00:36.144187 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztvs5" event={"ID":"7c1737a3-9dfe-4208-a8da-8be7f09394d9","Type":"ContainerStarted","Data":"04ebd1db66a0cd5b88352bd9193e137c2279647541f88e59841348b6157022fb"} Feb 19 10:00:36 crc kubenswrapper[4965]: I0219 10:00:36.144911 
4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztvs5" Feb 19 10:00:36 crc kubenswrapper[4965]: I0219 10:00:36.145844 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-zqmsr" event={"ID":"2f161526-b0fd-453b-8ae7-7b9b7a485b97","Type":"ContainerStarted","Data":"433a81af3c66af8510a42709da7c526b31258741a88950b5f0aa5d1be889d80e"} Feb 19 10:00:36 crc kubenswrapper[4965]: I0219 10:00:36.145946 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-zqmsr" Feb 19 10:00:36 crc kubenswrapper[4965]: I0219 10:00:36.147955 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-jzssc" event={"ID":"7e1ae3d6-7af0-406d-b740-98c9f5c9403c","Type":"ContainerStarted","Data":"7191aad0b9df1a2644ec1b151649c2899db6b6963b0598ba94964d1c528711c8"} Feb 19 10:00:36 crc kubenswrapper[4965]: I0219 10:00:36.148125 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-jzssc" Feb 19 10:00:36 crc kubenswrapper[4965]: I0219 10:00:36.150047 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-frln4" event={"ID":"f1723aed-01cb-4ac1-b191-299a6dd638e5","Type":"ContainerStarted","Data":"e1dd71b7de1fdc5c8d21633f40566b43d17b70739039ea8c07a73b78bf9ec019"} Feb 19 10:00:36 crc kubenswrapper[4965]: I0219 10:00:36.151675 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-54vfn" event={"ID":"6bd1df07-8b75-44b8-91a3-4f612b64c279","Type":"ContainerStarted","Data":"60ebcf2d329c6f20f2d4045af97571f04bf2460e66de4b103df836b16f915329"} Feb 19 10:00:36 crc kubenswrapper[4965]: 
I0219 10:00:36.151799 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-54vfn" Feb 19 10:00:36 crc kubenswrapper[4965]: I0219 10:00:36.153491 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-wp77d" event={"ID":"ca57fee7-64f8-4c49-9170-6f6e618c78e7","Type":"ContainerStarted","Data":"1f00de6f36ca3b1f672cb560285e2befda1fe80bd10ddf7b3ffa1ce04742e715"} Feb 19 10:00:36 crc kubenswrapper[4965]: I0219 10:00:36.153708 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-wp77d" Feb 19 10:00:36 crc kubenswrapper[4965]: I0219 10:00:36.155016 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-h27hl" event={"ID":"a354e865-3819-4147-a565-4682bc4c6a6c","Type":"ContainerStarted","Data":"3f5786a7328bf91e98e37a4bcfd25bbc948ea41d1356770619103cba390e3fc2"} Feb 19 10:00:36 crc kubenswrapper[4965]: I0219 10:00:36.155165 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-h27hl" Feb 19 10:00:36 crc kubenswrapper[4965]: I0219 10:00:36.156563 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" event={"ID":"58e82cd5-3bd0-4f99-b958-29e5541fa49a","Type":"ContainerStarted","Data":"db5482555df45a36680658bc6a0c311866c49ef0c5c99cf2fba0d926fab917c8"} Feb 19 10:00:36 crc kubenswrapper[4965]: I0219 10:00:36.156685 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" Feb 19 10:00:36 crc kubenswrapper[4965]: I0219 10:00:36.165699 4965 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztvs5" podStartSLOduration=3.5829611630000002 podStartE2EDuration="29.165680659s" podCreationTimestamp="2026-02-19 10:00:07 +0000 UTC" firstStartedPulling="2026-02-19 10:00:09.404999466 +0000 UTC m=+1065.026320776" lastFinishedPulling="2026-02-19 10:00:34.987718962 +0000 UTC m=+1090.609040272" observedRunningTime="2026-02-19 10:00:36.164624353 +0000 UTC m=+1091.785945663" watchObservedRunningTime="2026-02-19 10:00:36.165680659 +0000 UTC m=+1091.787001969" Feb 19 10:00:36 crc kubenswrapper[4965]: I0219 10:00:36.181117 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-jzssc" podStartSLOduration=3.419576754 podStartE2EDuration="29.181097965s" podCreationTimestamp="2026-02-19 10:00:07 +0000 UTC" firstStartedPulling="2026-02-19 10:00:09.223780823 +0000 UTC m=+1064.845102133" lastFinishedPulling="2026-02-19 10:00:34.985302014 +0000 UTC m=+1090.606623344" observedRunningTime="2026-02-19 10:00:36.176895522 +0000 UTC m=+1091.798216832" watchObservedRunningTime="2026-02-19 10:00:36.181097965 +0000 UTC m=+1091.802419275" Feb 19 10:00:36 crc kubenswrapper[4965]: I0219 10:00:36.205680 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" podStartSLOduration=18.533165736 podStartE2EDuration="29.205666403s" podCreationTimestamp="2026-02-19 10:00:07 +0000 UTC" firstStartedPulling="2026-02-19 10:00:24.260561904 +0000 UTC m=+1079.881883214" lastFinishedPulling="2026-02-19 10:00:34.933062571 +0000 UTC m=+1090.554383881" observedRunningTime="2026-02-19 10:00:36.203641274 +0000 UTC m=+1091.824962594" watchObservedRunningTime="2026-02-19 10:00:36.205666403 +0000 UTC m=+1091.826987713" Feb 19 10:00:36 crc kubenswrapper[4965]: I0219 10:00:36.233667 4965 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-54vfn" podStartSLOduration=3.529275005 podStartE2EDuration="29.233648364s" podCreationTimestamp="2026-02-19 10:00:07 +0000 UTC" firstStartedPulling="2026-02-19 10:00:09.238577083 +0000 UTC m=+1064.859898393" lastFinishedPulling="2026-02-19 10:00:34.942950442 +0000 UTC m=+1090.564271752" observedRunningTime="2026-02-19 10:00:36.222477972 +0000 UTC m=+1091.843799282" watchObservedRunningTime="2026-02-19 10:00:36.233648364 +0000 UTC m=+1091.854969674" Feb 19 10:00:36 crc kubenswrapper[4965]: I0219 10:00:36.247015 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-h27hl" podStartSLOduration=3.494617092 podStartE2EDuration="29.2469998s" podCreationTimestamp="2026-02-19 10:00:07 +0000 UTC" firstStartedPulling="2026-02-19 10:00:09.232888025 +0000 UTC m=+1064.854209335" lastFinishedPulling="2026-02-19 10:00:34.985270733 +0000 UTC m=+1090.606592043" observedRunningTime="2026-02-19 10:00:36.246065517 +0000 UTC m=+1091.867386827" watchObservedRunningTime="2026-02-19 10:00:36.2469998 +0000 UTC m=+1091.868321110" Feb 19 10:00:36 crc kubenswrapper[4965]: I0219 10:00:36.268662 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-wp77d" podStartSLOduration=3.516079804 podStartE2EDuration="29.268640086s" podCreationTimestamp="2026-02-19 10:00:07 +0000 UTC" firstStartedPulling="2026-02-19 10:00:09.232712631 +0000 UTC m=+1064.854033941" lastFinishedPulling="2026-02-19 10:00:34.985272913 +0000 UTC m=+1090.606594223" observedRunningTime="2026-02-19 10:00:36.266450213 +0000 UTC m=+1091.887771553" watchObservedRunningTime="2026-02-19 10:00:36.268640086 +0000 UTC m=+1091.889961416" Feb 19 10:00:36 crc kubenswrapper[4965]: I0219 10:00:36.284407 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-frln4" podStartSLOduration=2.612379509 podStartE2EDuration="28.284387539s" podCreationTimestamp="2026-02-19 10:00:08 +0000 UTC" firstStartedPulling="2026-02-19 10:00:09.39406228 +0000 UTC m=+1065.015383590" lastFinishedPulling="2026-02-19 10:00:35.0660703 +0000 UTC m=+1090.687391620" observedRunningTime="2026-02-19 10:00:36.277980194 +0000 UTC m=+1091.899301504" watchObservedRunningTime="2026-02-19 10:00:36.284387539 +0000 UTC m=+1091.905708849" Feb 19 10:00:36 crc kubenswrapper[4965]: I0219 10:00:36.300129 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-zqmsr" podStartSLOduration=18.487215126 podStartE2EDuration="29.300107143s" podCreationTimestamp="2026-02-19 10:00:07 +0000 UTC" firstStartedPulling="2026-02-19 10:00:24.124848329 +0000 UTC m=+1079.746169639" lastFinishedPulling="2026-02-19 10:00:34.937740346 +0000 UTC m=+1090.559061656" observedRunningTime="2026-02-19 10:00:36.292639221 +0000 UTC m=+1091.913960551" watchObservedRunningTime="2026-02-19 10:00:36.300107143 +0000 UTC m=+1091.921428453" Feb 19 10:00:40 crc kubenswrapper[4965]: I0219 10:00:40.296010 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-webhook-certs\") pod \"openstack-operator-controller-manager-7f6588fc96-6phd8\" (UID: \"186369a2-50b6-4226-be98-8876e469033f\") " pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:00:40 crc kubenswrapper[4965]: I0219 10:00:40.302172 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/186369a2-50b6-4226-be98-8876e469033f-webhook-certs\") pod \"openstack-operator-controller-manager-7f6588fc96-6phd8\" (UID: \"186369a2-50b6-4226-be98-8876e469033f\") " 
pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:00:40 crc kubenswrapper[4965]: I0219 10:00:40.375857 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-67zjg" Feb 19 10:00:40 crc kubenswrapper[4965]: I0219 10:00:40.383639 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:00:40 crc kubenswrapper[4965]: I0219 10:00:40.735644 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8"] Feb 19 10:00:40 crc kubenswrapper[4965]: W0219 10:00:40.740412 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod186369a2_50b6_4226_be98_8876e469033f.slice/crio-cffb9dd06e724e294611664db942cdf547a053604ef45b83e73213d2a652b8f5 WatchSource:0}: Error finding container cffb9dd06e724e294611664db942cdf547a053604ef45b83e73213d2a652b8f5: Status 404 returned error can't find the container with id cffb9dd06e724e294611664db942cdf547a053604ef45b83e73213d2a652b8f5 Feb 19 10:00:41 crc kubenswrapper[4965]: I0219 10:00:41.192682 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" event={"ID":"186369a2-50b6-4226-be98-8876e469033f","Type":"ContainerStarted","Data":"55e12183a3cc82e5e6d20108686876ec3d81bef6c567c8104c8f56db74f2fe96"} Feb 19 10:00:41 crc kubenswrapper[4965]: I0219 10:00:41.192747 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" event={"ID":"186369a2-50b6-4226-be98-8876e469033f","Type":"ContainerStarted","Data":"cffb9dd06e724e294611664db942cdf547a053604ef45b83e73213d2a652b8f5"} Feb 19 10:00:41 crc kubenswrapper[4965]: I0219 
10:00:41.192896 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:00:41 crc kubenswrapper[4965]: I0219 10:00:41.234621 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" podStartSLOduration=33.234592902 podStartE2EDuration="33.234592902s" podCreationTimestamp="2026-02-19 10:00:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:00:41.226236998 +0000 UTC m=+1096.847558298" watchObservedRunningTime="2026-02-19 10:00:41.234592902 +0000 UTC m=+1096.855914242" Feb 19 10:00:43 crc kubenswrapper[4965]: I0219 10:00:43.703727 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-zqmsr" Feb 19 10:00:43 crc kubenswrapper[4965]: I0219 10:00:43.832842 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" Feb 19 10:00:46 crc kubenswrapper[4965]: I0219 10:00:46.601396 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:00:46 crc kubenswrapper[4965]: I0219 10:00:46.602420 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:00:48 crc 
kubenswrapper[4965]: I0219 10:00:48.279203 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-wp77d" Feb 19 10:00:48 crc kubenswrapper[4965]: I0219 10:00:48.305699 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-h27hl" Feb 19 10:00:48 crc kubenswrapper[4965]: I0219 10:00:48.353513 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-jzssc" Feb 19 10:00:48 crc kubenswrapper[4965]: I0219 10:00:48.429066 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-54vfn" Feb 19 10:00:48 crc kubenswrapper[4965]: I0219 10:00:48.504584 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztvs5" Feb 19 10:00:50 crc kubenswrapper[4965]: I0219 10:00:50.390308 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7f6588fc96-6phd8" Feb 19 10:01:08 crc kubenswrapper[4965]: I0219 10:01:08.852312 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r6prk"] Feb 19 10:01:08 crc kubenswrapper[4965]: I0219 10:01:08.854354 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-r6prk" Feb 19 10:01:08 crc kubenswrapper[4965]: I0219 10:01:08.856812 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 19 10:01:08 crc kubenswrapper[4965]: I0219 10:01:08.857158 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 19 10:01:08 crc kubenswrapper[4965]: I0219 10:01:08.857719 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-npxx9" Feb 19 10:01:08 crc kubenswrapper[4965]: I0219 10:01:08.858016 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 19 10:01:08 crc kubenswrapper[4965]: I0219 10:01:08.868113 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r6prk"] Feb 19 10:01:08 crc kubenswrapper[4965]: I0219 10:01:08.946256 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/218ae800-f2dc-4ae1-beeb-bf4847797fbd-config\") pod \"dnsmasq-dns-675f4bcbfc-r6prk\" (UID: \"218ae800-f2dc-4ae1-beeb-bf4847797fbd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r6prk" Feb 19 10:01:08 crc kubenswrapper[4965]: I0219 10:01:08.946321 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvqmk\" (UniqueName: \"kubernetes.io/projected/218ae800-f2dc-4ae1-beeb-bf4847797fbd-kube-api-access-lvqmk\") pod \"dnsmasq-dns-675f4bcbfc-r6prk\" (UID: \"218ae800-f2dc-4ae1-beeb-bf4847797fbd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r6prk" Feb 19 10:01:08 crc kubenswrapper[4965]: I0219 10:01:08.955714 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7djwj"] Feb 19 10:01:08 crc kubenswrapper[4965]: I0219 10:01:08.956763 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7djwj" Feb 19 10:01:08 crc kubenswrapper[4965]: I0219 10:01:08.963874 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 19 10:01:08 crc kubenswrapper[4965]: I0219 10:01:08.982730 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7djwj"] Feb 19 10:01:09 crc kubenswrapper[4965]: I0219 10:01:09.047766 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/218ae800-f2dc-4ae1-beeb-bf4847797fbd-config\") pod \"dnsmasq-dns-675f4bcbfc-r6prk\" (UID: \"218ae800-f2dc-4ae1-beeb-bf4847797fbd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r6prk" Feb 19 10:01:09 crc kubenswrapper[4965]: I0219 10:01:09.047865 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7djwj\" (UID: \"7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7djwj" Feb 19 10:01:09 crc kubenswrapper[4965]: I0219 10:01:09.047902 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvqmk\" (UniqueName: \"kubernetes.io/projected/218ae800-f2dc-4ae1-beeb-bf4847797fbd-kube-api-access-lvqmk\") pod \"dnsmasq-dns-675f4bcbfc-r6prk\" (UID: \"218ae800-f2dc-4ae1-beeb-bf4847797fbd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r6prk" Feb 19 10:01:09 crc kubenswrapper[4965]: I0219 10:01:09.047982 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9kkq\" (UniqueName: \"kubernetes.io/projected/7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d-kube-api-access-g9kkq\") pod \"dnsmasq-dns-78dd6ddcc-7djwj\" (UID: \"7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7djwj" Feb 19 10:01:09 crc 
kubenswrapper[4965]: I0219 10:01:09.048019 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d-config\") pod \"dnsmasq-dns-78dd6ddcc-7djwj\" (UID: \"7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7djwj" Feb 19 10:01:09 crc kubenswrapper[4965]: I0219 10:01:09.049065 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/218ae800-f2dc-4ae1-beeb-bf4847797fbd-config\") pod \"dnsmasq-dns-675f4bcbfc-r6prk\" (UID: \"218ae800-f2dc-4ae1-beeb-bf4847797fbd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r6prk" Feb 19 10:01:09 crc kubenswrapper[4965]: I0219 10:01:09.083173 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvqmk\" (UniqueName: \"kubernetes.io/projected/218ae800-f2dc-4ae1-beeb-bf4847797fbd-kube-api-access-lvqmk\") pod \"dnsmasq-dns-675f4bcbfc-r6prk\" (UID: \"218ae800-f2dc-4ae1-beeb-bf4847797fbd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r6prk" Feb 19 10:01:09 crc kubenswrapper[4965]: I0219 10:01:09.149473 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9kkq\" (UniqueName: \"kubernetes.io/projected/7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d-kube-api-access-g9kkq\") pod \"dnsmasq-dns-78dd6ddcc-7djwj\" (UID: \"7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7djwj" Feb 19 10:01:09 crc kubenswrapper[4965]: I0219 10:01:09.149554 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d-config\") pod \"dnsmasq-dns-78dd6ddcc-7djwj\" (UID: \"7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7djwj" Feb 19 10:01:09 crc kubenswrapper[4965]: I0219 10:01:09.149637 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7djwj\" (UID: \"7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7djwj" Feb 19 10:01:09 crc kubenswrapper[4965]: I0219 10:01:09.150778 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7djwj\" (UID: \"7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7djwj" Feb 19 10:01:09 crc kubenswrapper[4965]: I0219 10:01:09.151266 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d-config\") pod \"dnsmasq-dns-78dd6ddcc-7djwj\" (UID: \"7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7djwj" Feb 19 10:01:09 crc kubenswrapper[4965]: I0219 10:01:09.169208 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9kkq\" (UniqueName: \"kubernetes.io/projected/7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d-kube-api-access-g9kkq\") pod \"dnsmasq-dns-78dd6ddcc-7djwj\" (UID: \"7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7djwj" Feb 19 10:01:09 crc kubenswrapper[4965]: I0219 10:01:09.173888 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-r6prk" Feb 19 10:01:09 crc kubenswrapper[4965]: I0219 10:01:09.278058 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7djwj" Feb 19 10:01:09 crc kubenswrapper[4965]: I0219 10:01:09.633257 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r6prk"] Feb 19 10:01:09 crc kubenswrapper[4965]: I0219 10:01:09.760952 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7djwj"] Feb 19 10:01:09 crc kubenswrapper[4965]: W0219 10:01:09.763522 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a3ae7dc_3ce4_4d63_9e90_d005f3de3d8d.slice/crio-bc2dd8e8e2ae456cec3cb0ff7ce55ae8df59319bc1ba8f8876637179ffb4d0f6 WatchSource:0}: Error finding container bc2dd8e8e2ae456cec3cb0ff7ce55ae8df59319bc1ba8f8876637179ffb4d0f6: Status 404 returned error can't find the container with id bc2dd8e8e2ae456cec3cb0ff7ce55ae8df59319bc1ba8f8876637179ffb4d0f6 Feb 19 10:01:10 crc kubenswrapper[4965]: I0219 10:01:10.443416 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7djwj" event={"ID":"7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d","Type":"ContainerStarted","Data":"bc2dd8e8e2ae456cec3cb0ff7ce55ae8df59319bc1ba8f8876637179ffb4d0f6"} Feb 19 10:01:10 crc kubenswrapper[4965]: I0219 10:01:10.446024 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-r6prk" event={"ID":"218ae800-f2dc-4ae1-beeb-bf4847797fbd","Type":"ContainerStarted","Data":"bca4b019e1c6e795a97e003de0e38a69d9169d9f351fe0e1bf37e00d374f4972"} Feb 19 10:01:11 crc kubenswrapper[4965]: I0219 10:01:11.826913 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r6prk"] Feb 19 10:01:11 crc kubenswrapper[4965]: I0219 10:01:11.854528 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hb29s"] Feb 19 10:01:11 crc kubenswrapper[4965]: I0219 10:01:11.859361 4965 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hb29s" Feb 19 10:01:11 crc kubenswrapper[4965]: I0219 10:01:11.874245 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hb29s"] Feb 19 10:01:11 crc kubenswrapper[4965]: I0219 10:01:11.998890 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq544\" (UniqueName: \"kubernetes.io/projected/45fd5ec9-f248-4c13-b4a5-85885283391e-kube-api-access-hq544\") pod \"dnsmasq-dns-666b6646f7-hb29s\" (UID: \"45fd5ec9-f248-4c13-b4a5-85885283391e\") " pod="openstack/dnsmasq-dns-666b6646f7-hb29s" Feb 19 10:01:11 crc kubenswrapper[4965]: I0219 10:01:11.998962 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45fd5ec9-f248-4c13-b4a5-85885283391e-config\") pod \"dnsmasq-dns-666b6646f7-hb29s\" (UID: \"45fd5ec9-f248-4c13-b4a5-85885283391e\") " pod="openstack/dnsmasq-dns-666b6646f7-hb29s" Feb 19 10:01:11 crc kubenswrapper[4965]: I0219 10:01:11.999461 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45fd5ec9-f248-4c13-b4a5-85885283391e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hb29s\" (UID: \"45fd5ec9-f248-4c13-b4a5-85885283391e\") " pod="openstack/dnsmasq-dns-666b6646f7-hb29s" Feb 19 10:01:12 crc kubenswrapper[4965]: I0219 10:01:12.100715 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45fd5ec9-f248-4c13-b4a5-85885283391e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hb29s\" (UID: \"45fd5ec9-f248-4c13-b4a5-85885283391e\") " pod="openstack/dnsmasq-dns-666b6646f7-hb29s" Feb 19 10:01:12 crc kubenswrapper[4965]: I0219 10:01:12.101122 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq544\" 
(UniqueName: \"kubernetes.io/projected/45fd5ec9-f248-4c13-b4a5-85885283391e-kube-api-access-hq544\") pod \"dnsmasq-dns-666b6646f7-hb29s\" (UID: \"45fd5ec9-f248-4c13-b4a5-85885283391e\") " pod="openstack/dnsmasq-dns-666b6646f7-hb29s" Feb 19 10:01:12 crc kubenswrapper[4965]: I0219 10:01:12.101147 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45fd5ec9-f248-4c13-b4a5-85885283391e-config\") pod \"dnsmasq-dns-666b6646f7-hb29s\" (UID: \"45fd5ec9-f248-4c13-b4a5-85885283391e\") " pod="openstack/dnsmasq-dns-666b6646f7-hb29s" Feb 19 10:01:12 crc kubenswrapper[4965]: I0219 10:01:12.101581 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45fd5ec9-f248-4c13-b4a5-85885283391e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hb29s\" (UID: \"45fd5ec9-f248-4c13-b4a5-85885283391e\") " pod="openstack/dnsmasq-dns-666b6646f7-hb29s" Feb 19 10:01:12 crc kubenswrapper[4965]: I0219 10:01:12.102273 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45fd5ec9-f248-4c13-b4a5-85885283391e-config\") pod \"dnsmasq-dns-666b6646f7-hb29s\" (UID: \"45fd5ec9-f248-4c13-b4a5-85885283391e\") " pod="openstack/dnsmasq-dns-666b6646f7-hb29s" Feb 19 10:01:12 crc kubenswrapper[4965]: I0219 10:01:12.140315 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq544\" (UniqueName: \"kubernetes.io/projected/45fd5ec9-f248-4c13-b4a5-85885283391e-kube-api-access-hq544\") pod \"dnsmasq-dns-666b6646f7-hb29s\" (UID: \"45fd5ec9-f248-4c13-b4a5-85885283391e\") " pod="openstack/dnsmasq-dns-666b6646f7-hb29s" Feb 19 10:01:12 crc kubenswrapper[4965]: I0219 10:01:12.192901 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hb29s" Feb 19 10:01:12 crc kubenswrapper[4965]: I0219 10:01:12.259153 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7djwj"] Feb 19 10:01:12 crc kubenswrapper[4965]: I0219 10:01:12.295985 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5xvv9"] Feb 19 10:01:12 crc kubenswrapper[4965]: I0219 10:01:12.300334 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5xvv9" Feb 19 10:01:12 crc kubenswrapper[4965]: I0219 10:01:12.302807 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5xvv9"] Feb 19 10:01:12 crc kubenswrapper[4965]: I0219 10:01:12.404942 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f73b9d2-a434-4638-bce4-6c710166a455-config\") pod \"dnsmasq-dns-57d769cc4f-5xvv9\" (UID: \"3f73b9d2-a434-4638-bce4-6c710166a455\") " pod="openstack/dnsmasq-dns-57d769cc4f-5xvv9" Feb 19 10:01:12 crc kubenswrapper[4965]: I0219 10:01:12.404997 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2ptb\" (UniqueName: \"kubernetes.io/projected/3f73b9d2-a434-4638-bce4-6c710166a455-kube-api-access-r2ptb\") pod \"dnsmasq-dns-57d769cc4f-5xvv9\" (UID: \"3f73b9d2-a434-4638-bce4-6c710166a455\") " pod="openstack/dnsmasq-dns-57d769cc4f-5xvv9" Feb 19 10:01:12 crc kubenswrapper[4965]: I0219 10:01:12.405074 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f73b9d2-a434-4638-bce4-6c710166a455-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5xvv9\" (UID: \"3f73b9d2-a434-4638-bce4-6c710166a455\") " pod="openstack/dnsmasq-dns-57d769cc4f-5xvv9" Feb 19 10:01:12 crc kubenswrapper[4965]: I0219 
10:01:12.507069 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f73b9d2-a434-4638-bce4-6c710166a455-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5xvv9\" (UID: \"3f73b9d2-a434-4638-bce4-6c710166a455\") " pod="openstack/dnsmasq-dns-57d769cc4f-5xvv9" Feb 19 10:01:12 crc kubenswrapper[4965]: I0219 10:01:12.507493 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f73b9d2-a434-4638-bce4-6c710166a455-config\") pod \"dnsmasq-dns-57d769cc4f-5xvv9\" (UID: \"3f73b9d2-a434-4638-bce4-6c710166a455\") " pod="openstack/dnsmasq-dns-57d769cc4f-5xvv9" Feb 19 10:01:12 crc kubenswrapper[4965]: I0219 10:01:12.507526 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2ptb\" (UniqueName: \"kubernetes.io/projected/3f73b9d2-a434-4638-bce4-6c710166a455-kube-api-access-r2ptb\") pod \"dnsmasq-dns-57d769cc4f-5xvv9\" (UID: \"3f73b9d2-a434-4638-bce4-6c710166a455\") " pod="openstack/dnsmasq-dns-57d769cc4f-5xvv9" Feb 19 10:01:12 crc kubenswrapper[4965]: I0219 10:01:12.508582 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f73b9d2-a434-4638-bce4-6c710166a455-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5xvv9\" (UID: \"3f73b9d2-a434-4638-bce4-6c710166a455\") " pod="openstack/dnsmasq-dns-57d769cc4f-5xvv9" Feb 19 10:01:12 crc kubenswrapper[4965]: I0219 10:01:12.509406 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f73b9d2-a434-4638-bce4-6c710166a455-config\") pod \"dnsmasq-dns-57d769cc4f-5xvv9\" (UID: \"3f73b9d2-a434-4638-bce4-6c710166a455\") " pod="openstack/dnsmasq-dns-57d769cc4f-5xvv9" Feb 19 10:01:12 crc kubenswrapper[4965]: I0219 10:01:12.527104 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2ptb\" 
(UniqueName: \"kubernetes.io/projected/3f73b9d2-a434-4638-bce4-6c710166a455-kube-api-access-r2ptb\") pod \"dnsmasq-dns-57d769cc4f-5xvv9\" (UID: \"3f73b9d2-a434-4638-bce4-6c710166a455\") " pod="openstack/dnsmasq-dns-57d769cc4f-5xvv9" Feb 19 10:01:12 crc kubenswrapper[4965]: I0219 10:01:12.719575 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5xvv9" Feb 19 10:01:12 crc kubenswrapper[4965]: I0219 10:01:12.881118 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hb29s"] Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.078126 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.079607 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.081302 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.082809 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.082924 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-7ghtg" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.085778 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.086027 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.086196 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.090583 4965 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.099718 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.234063 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvmrg\" (UniqueName: \"kubernetes.io/projected/305a32d6-c9f8-4494-b356-75d6c54c7467-kube-api-access-qvmrg\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.234716 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/305a32d6-c9f8-4494-b356-75d6c54c7467-config-data\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.234742 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.234767 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/305a32d6-c9f8-4494-b356-75d6c54c7467-server-conf\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.234788 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.234815 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.234838 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/305a32d6-c9f8-4494-b356-75d6c54c7467-pod-info\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.234871 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/305a32d6-c9f8-4494-b356-75d6c54c7467-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.234895 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.234922 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/305a32d6-c9f8-4494-b356-75d6c54c7467-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.234953 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.249699 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5xvv9"] Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.336342 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/305a32d6-c9f8-4494-b356-75d6c54c7467-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.336418 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.336451 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvmrg\" (UniqueName: \"kubernetes.io/projected/305a32d6-c9f8-4494-b356-75d6c54c7467-kube-api-access-qvmrg\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 
10:01:13.336494 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/305a32d6-c9f8-4494-b356-75d6c54c7467-config-data\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.336525 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.336553 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/305a32d6-c9f8-4494-b356-75d6c54c7467-server-conf\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.336579 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.336612 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.336640 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/305a32d6-c9f8-4494-b356-75d6c54c7467-pod-info\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.336680 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/305a32d6-c9f8-4494-b356-75d6c54c7467-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.336708 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.338465 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.338517 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/305a32d6-c9f8-4494-b356-75d6c54c7467-config-data\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.338593 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/305a32d6-c9f8-4494-b356-75d6c54c7467-server-conf\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " 
pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.339186 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.339460 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/305a32d6-c9f8-4494-b356-75d6c54c7467-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.340531 4965 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.340560 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a606e5eb2618d24d56413a1015b901d80936de0be271267d3eeee72120bb76ae/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.346272 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/305a32d6-c9f8-4494-b356-75d6c54c7467-pod-info\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.346525 4965 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/305a32d6-c9f8-4494-b356-75d6c54c7467-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.357381 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvmrg\" (UniqueName: \"kubernetes.io/projected/305a32d6-c9f8-4494-b356-75d6c54c7467-kube-api-access-qvmrg\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.361907 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.370585 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.396304 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c\") pod \"rabbitmq-server-0\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.426737 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.427873 4965 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.431547 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.438319 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xs2sm" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.438506 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.438612 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.438731 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.439706 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.439878 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.440388 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.451569 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.475716 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hb29s" event={"ID":"45fd5ec9-f248-4c13-b4a5-85885283391e","Type":"ContainerStarted","Data":"94e201d574249473114d4fc54e519fe0bfcec73df77264df290763c5eb533e91"} Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 
10:01:13.540249 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.540326 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.540354 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.540383 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.540403 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.540429 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.540450 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.540479 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.540506 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.540535 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.540579 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzvj4\" (UniqueName: \"kubernetes.io/projected/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-kube-api-access-fzvj4\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.642330 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.643092 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.643142 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.643167 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.643216 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.643245 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.643280 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.643310 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.643342 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.643401 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fzvj4\" (UniqueName: \"kubernetes.io/projected/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-kube-api-access-fzvj4\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.643445 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.643807 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.644156 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.644605 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.644650 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.645280 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.646065 4965 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.646088 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c02e9c89efc1374aeb0c7995657ea859640e28776f8f5ffa07ca5ea0e348ba25/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.647508 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.649918 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-tls\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.655553 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.660430 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzvj4\" (UniqueName: \"kubernetes.io/projected/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-kube-api-access-fzvj4\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.683046 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.708138 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554\") pod \"rabbitmq-cell1-server-0\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 10:01:13 crc kubenswrapper[4965]: I0219 10:01:13.773647 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.297651 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.299431 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.301077 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-qx8wx"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.302699 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.303362 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.307183 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.309115 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.309265 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.456325 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/215df1f4-6c30-4144-b141-5a867e8d2728-operator-scripts\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.456374 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d8732f81-6f44-4b1d-8d1d-03a7e32121e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d8732f81-6f44-4b1d-8d1d-03a7e32121e5\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.456404 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/215df1f4-6c30-4144-b141-5a867e8d2728-kolla-config\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.456421 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/215df1f4-6c30-4144-b141-5a867e8d2728-config-data-default\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.456463 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/215df1f4-6c30-4144-b141-5a867e8d2728-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.456496 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/215df1f4-6c30-4144-b141-5a867e8d2728-config-data-generated\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.456563 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h65h8\" (UniqueName: \"kubernetes.io/projected/215df1f4-6c30-4144-b141-5a867e8d2728-kube-api-access-h65h8\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.456589 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215df1f4-6c30-4144-b141-5a867e8d2728-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.565847 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/215df1f4-6c30-4144-b141-5a867e8d2728-operator-scripts\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.565905 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d8732f81-6f44-4b1d-8d1d-03a7e32121e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d8732f81-6f44-4b1d-8d1d-03a7e32121e5\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.565941 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/215df1f4-6c30-4144-b141-5a867e8d2728-kolla-config\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.565963 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/215df1f4-6c30-4144-b141-5a867e8d2728-config-data-default\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.566035 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/215df1f4-6c30-4144-b141-5a867e8d2728-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.566088 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/215df1f4-6c30-4144-b141-5a867e8d2728-config-data-generated\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.566123 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h65h8\" (UniqueName: \"kubernetes.io/projected/215df1f4-6c30-4144-b141-5a867e8d2728-kube-api-access-h65h8\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.566140 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215df1f4-6c30-4144-b141-5a867e8d2728-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.566639 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/215df1f4-6c30-4144-b141-5a867e8d2728-config-data-generated\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.567567 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/215df1f4-6c30-4144-b141-5a867e8d2728-config-data-default\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.569418 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/215df1f4-6c30-4144-b141-5a867e8d2728-kolla-config\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.569568 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/215df1f4-6c30-4144-b141-5a867e8d2728-operator-scripts\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.570931 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/215df1f4-6c30-4144-b141-5a867e8d2728-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.575542 4965 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.575575 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d8732f81-6f44-4b1d-8d1d-03a7e32121e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d8732f81-6f44-4b1d-8d1d-03a7e32121e5\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cddfb92fd793010c53e6516a9d8285aac0a94dea5e511f6e5f60cd5bfcac332f/globalmount\"" pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.585641 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h65h8\" (UniqueName: \"kubernetes.io/projected/215df1f4-6c30-4144-b141-5a867e8d2728-kube-api-access-h65h8\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.604177 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215df1f4-6c30-4144-b141-5a867e8d2728-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.627173 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d8732f81-6f44-4b1d-8d1d-03a7e32121e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d8732f81-6f44-4b1d-8d1d-03a7e32121e5\") pod \"openstack-galera-0\" (UID: \"215df1f4-6c30-4144-b141-5a867e8d2728\") " pod="openstack/openstack-galera-0"
Feb 19 10:01:14 crc kubenswrapper[4965]: I0219 10:01:14.920149 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.722674 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.724567 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.727281 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.727354 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-v5pcj"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.727389 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.727777 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.739151 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.883068 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k64jt\" (UniqueName: \"kubernetes.io/projected/5b862187-0edd-4939-9260-d0d35653485c-kube-api-access-k64jt\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.883113 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5b862187-0edd-4939-9260-d0d35653485c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.883149 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b862187-0edd-4939-9260-d0d35653485c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.883167 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5b862187-0edd-4939-9260-d0d35653485c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.883218 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b862187-0edd-4939-9260-d0d35653485c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.883237 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a95e62cc-dbc7-4521-96b2-997f99f9d6ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a95e62cc-dbc7-4521-96b2-997f99f9d6ff\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.883277 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b862187-0edd-4939-9260-d0d35653485c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.883293 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5b862187-0edd-4939-9260-d0d35653485c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.978818 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.979730 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.984302 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b862187-0edd-4939-9260-d0d35653485c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.984342 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5b862187-0edd-4939-9260-d0d35653485c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.984396 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k64jt\" (UniqueName: \"kubernetes.io/projected/5b862187-0edd-4939-9260-d0d35653485c-kube-api-access-k64jt\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.984424 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5b862187-0edd-4939-9260-d0d35653485c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.984456 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b862187-0edd-4939-9260-d0d35653485c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.984474 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5b862187-0edd-4939-9260-d0d35653485c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.984509 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b862187-0edd-4939-9260-d0d35653485c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.984546 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a95e62cc-dbc7-4521-96b2-997f99f9d6ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a95e62cc-dbc7-4521-96b2-997f99f9d6ff\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.985267 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5b862187-0edd-4939-9260-d0d35653485c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.985593 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5b862187-0edd-4939-9260-d0d35653485c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.985863 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.985982 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.986038 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-vq45m"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.986051 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5b862187-0edd-4939-9260-d0d35653485c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.988494 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b862187-0edd-4939-9260-d0d35653485c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:15 crc kubenswrapper[4965]: I0219 10:01:15.999886 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.008975 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b862187-0edd-4939-9260-d0d35653485c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.009747 4965 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.009788 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a95e62cc-dbc7-4521-96b2-997f99f9d6ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a95e62cc-dbc7-4521-96b2-997f99f9d6ff\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c078c698d7b895a5a409845da36cc00bbc3dcdc0765fe34edead988ad61f5bd2/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.012814 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b862187-0edd-4939-9260-d0d35653485c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.028061 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k64jt\" (UniqueName: \"kubernetes.io/projected/5b862187-0edd-4939-9260-d0d35653485c-kube-api-access-k64jt\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.056928 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a95e62cc-dbc7-4521-96b2-997f99f9d6ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a95e62cc-dbc7-4521-96b2-997f99f9d6ff\") pod \"openstack-cell1-galera-0\" (UID: \"5b862187-0edd-4939-9260-d0d35653485c\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.086134 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40caef4c-7f84-42cb-b51c-b0884efc2052-config-data\") pod \"memcached-0\" (UID: \"40caef4c-7f84-42cb-b51c-b0884efc2052\") " pod="openstack/memcached-0"
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.086178 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/40caef4c-7f84-42cb-b51c-b0884efc2052-memcached-tls-certs\") pod \"memcached-0\" (UID: \"40caef4c-7f84-42cb-b51c-b0884efc2052\") " pod="openstack/memcached-0"
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.086223 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn9lt\" (UniqueName: \"kubernetes.io/projected/40caef4c-7f84-42cb-b51c-b0884efc2052-kube-api-access-kn9lt\") pod \"memcached-0\" (UID: \"40caef4c-7f84-42cb-b51c-b0884efc2052\") " pod="openstack/memcached-0"
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.086242 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40caef4c-7f84-42cb-b51c-b0884efc2052-combined-ca-bundle\") pod \"memcached-0\" (UID: \"40caef4c-7f84-42cb-b51c-b0884efc2052\") " pod="openstack/memcached-0"
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.086263 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/40caef4c-7f84-42cb-b51c-b0884efc2052-kolla-config\") pod \"memcached-0\" (UID: \"40caef4c-7f84-42cb-b51c-b0884efc2052\") " pod="openstack/memcached-0"
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.188279 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/40caef4c-7f84-42cb-b51c-b0884efc2052-kolla-config\") pod \"memcached-0\" (UID: \"40caef4c-7f84-42cb-b51c-b0884efc2052\") " pod="openstack/memcached-0"
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.188421 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40caef4c-7f84-42cb-b51c-b0884efc2052-config-data\") pod \"memcached-0\" (UID: \"40caef4c-7f84-42cb-b51c-b0884efc2052\") " pod="openstack/memcached-0"
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.188440 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/40caef4c-7f84-42cb-b51c-b0884efc2052-memcached-tls-certs\") pod \"memcached-0\" (UID: \"40caef4c-7f84-42cb-b51c-b0884efc2052\") " pod="openstack/memcached-0"
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.188461 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn9lt\" (UniqueName: \"kubernetes.io/projected/40caef4c-7f84-42cb-b51c-b0884efc2052-kube-api-access-kn9lt\") pod \"memcached-0\" (UID: \"40caef4c-7f84-42cb-b51c-b0884efc2052\") " pod="openstack/memcached-0"
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.188478 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40caef4c-7f84-42cb-b51c-b0884efc2052-combined-ca-bundle\") pod \"memcached-0\" (UID: \"40caef4c-7f84-42cb-b51c-b0884efc2052\") " pod="openstack/memcached-0"
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.189172 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40caef4c-7f84-42cb-b51c-b0884efc2052-config-data\") pod \"memcached-0\" (UID: \"40caef4c-7f84-42cb-b51c-b0884efc2052\") " pod="openstack/memcached-0"
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.189753 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/40caef4c-7f84-42cb-b51c-b0884efc2052-kolla-config\") pod \"memcached-0\" (UID: \"40caef4c-7f84-42cb-b51c-b0884efc2052\") " pod="openstack/memcached-0"
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.192736 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40caef4c-7f84-42cb-b51c-b0884efc2052-combined-ca-bundle\") pod \"memcached-0\" (UID: \"40caef4c-7f84-42cb-b51c-b0884efc2052\") " pod="openstack/memcached-0"
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.193359 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/40caef4c-7f84-42cb-b51c-b0884efc2052-memcached-tls-certs\") pod \"memcached-0\" (UID: \"40caef4c-7f84-42cb-b51c-b0884efc2052\") " pod="openstack/memcached-0"
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.215009 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn9lt\" (UniqueName: \"kubernetes.io/projected/40caef4c-7f84-42cb-b51c-b0884efc2052-kube-api-access-kn9lt\") pod \"memcached-0\" (UID: \"40caef4c-7f84-42cb-b51c-b0884efc2052\") " pod="openstack/memcached-0"
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.340858 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.381717 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.539688 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5xvv9" event={"ID":"3f73b9d2-a434-4638-bce4-6c710166a455","Type":"ContainerStarted","Data":"268d6d4b5946ce229529d7ac053c81513f5f2f8d4df0774bee1035b3f0b1fd88"}
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.601326 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 10:01:16 crc kubenswrapper[4965]: I0219 10:01:16.601424 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 10:01:18 crc kubenswrapper[4965]: I0219 10:01:18.060138 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 19 10:01:18 crc kubenswrapper[4965]: I0219 10:01:18.061681 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 19 10:01:18 crc kubenswrapper[4965]: W0219 10:01:18.084478 4965 reflector.go:561] object-"openstack"/"telemetry-ceilometer-dockercfg-xl8z6": failed to list *v1.Secret: secrets "telemetry-ceilometer-dockercfg-xl8z6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object
Feb 19 10:01:18 crc kubenswrapper[4965]: E0219 10:01:18.084538 4965 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"telemetry-ceilometer-dockercfg-xl8z6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"telemetry-ceilometer-dockercfg-xl8z6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 19 10:01:18 crc kubenswrapper[4965]: I0219 10:01:18.098713 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 19 10:01:18 crc kubenswrapper[4965]: I0219 10:01:18.121094 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcqdp\" (UniqueName: \"kubernetes.io/projected/6263bd60-b2d0-44ff-ae54-874728576f1d-kube-api-access-zcqdp\") pod \"kube-state-metrics-0\" (UID: \"6263bd60-b2d0-44ff-ae54-874728576f1d\") " pod="openstack/kube-state-metrics-0"
Feb 19 10:01:18 crc kubenswrapper[4965]: I0219 10:01:18.222371 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqdp\" (UniqueName: \"kubernetes.io/projected/6263bd60-b2d0-44ff-ae54-874728576f1d-kube-api-access-zcqdp\") pod \"kube-state-metrics-0\" (UID: \"6263bd60-b2d0-44ff-ae54-874728576f1d\") " pod="openstack/kube-state-metrics-0"
Feb 19 10:01:18 crc kubenswrapper[4965]: I0219 10:01:18.263430 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcqdp\" (UniqueName: \"kubernetes.io/projected/6263bd60-b2d0-44ff-ae54-874728576f1d-kube-api-access-zcqdp\") pod \"kube-state-metrics-0\" (UID: \"6263bd60-b2d0-44ff-ae54-874728576f1d\") " pod="openstack/kube-state-metrics-0"
Feb 19 10:01:18 crc kubenswrapper[4965]: I0219 10:01:18.853643 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Feb 19 10:01:18 crc kubenswrapper[4965]: I0219 10:01:18.855387 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0"
Feb 19 10:01:18 crc kubenswrapper[4965]: I0219 10:01:18.857061 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config"
Feb 19 10:01:18 crc kubenswrapper[4965]: I0219 10:01:18.859105 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config"
Feb 19 10:01:18 crc kubenswrapper[4965]: I0219 10:01:18.859158 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0"
Feb 19 10:01:18 crc kubenswrapper[4965]: I0219 10:01:18.859227 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated"
Feb 19 10:01:18 crc kubenswrapper[4965]: I0219 10:01:18.859462 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-7wsbb"
Feb 19 10:01:18 crc kubenswrapper[4965]: I0219 10:01:18.863946 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Feb 19 10:01:18 crc kubenswrapper[4965]: I0219 10:01:18.933235 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxkth\" (UniqueName: \"kubernetes.io/projected/45105c9e-db96-41c5-ba42-d56027ca318c-kube-api-access-hxkth\") pod \"alertmanager-metric-storage-0\" (UID: \"45105c9e-db96-41c5-ba42-d56027ca318c\") " pod="openstack/alertmanager-metric-storage-0"
Feb 19 10:01:18 crc kubenswrapper[4965]: I0219 10:01:18.933337 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/45105c9e-db96-41c5-ba42-d56027ca318c-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"45105c9e-db96-41c5-ba42-d56027ca318c\") " pod="openstack/alertmanager-metric-storage-0"
Feb 19 10:01:18 crc kubenswrapper[4965]: I0219 10:01:18.933393 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/45105c9e-db96-41c5-ba42-d56027ca318c-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"45105c9e-db96-41c5-ba42-d56027ca318c\") " pod="openstack/alertmanager-metric-storage-0"
Feb 19 10:01:18 crc kubenswrapper[4965]: I0219 10:01:18.933439 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/45105c9e-db96-41c5-ba42-d56027ca318c-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"45105c9e-db96-41c5-ba42-d56027ca318c\") " pod="openstack/alertmanager-metric-storage-0"
Feb 19 10:01:18 crc kubenswrapper[4965]: I0219 10:01:18.933488 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/45105c9e-db96-41c5-ba42-d56027ca318c-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"45105c9e-db96-41c5-ba42-d56027ca318c\") " pod="openstack/alertmanager-metric-storage-0"
Feb 19 10:01:18 crc kubenswrapper[4965]: I0219 10:01:18.933556 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/45105c9e-db96-41c5-ba42-d56027ca318c-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"45105c9e-db96-41c5-ba42-d56027ca318c\") " pod="openstack/alertmanager-metric-storage-0"
Feb 19 10:01:18 crc kubenswrapper[4965]: I0219 10:01:18.933585 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/45105c9e-db96-41c5-ba42-d56027ca318c-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"45105c9e-db96-41c5-ba42-d56027ca318c\") " pod="openstack/alertmanager-metric-storage-0"
Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.035555 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/45105c9e-db96-41c5-ba42-d56027ca318c-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"45105c9e-db96-41c5-ba42-d56027ca318c\") " pod="openstack/alertmanager-metric-storage-0"
Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.035607 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/45105c9e-db96-41c5-ba42-d56027ca318c-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"45105c9e-db96-41c5-ba42-d56027ca318c\") " pod="openstack/alertmanager-metric-storage-0"
Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.035677 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxkth\" (UniqueName: \"kubernetes.io/projected/45105c9e-db96-41c5-ba42-d56027ca318c-kube-api-access-hxkth\") pod \"alertmanager-metric-storage-0\" (UID: \"45105c9e-db96-41c5-ba42-d56027ca318c\") " pod="openstack/alertmanager-metric-storage-0"
Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.035741 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName:
\"kubernetes.io/secret/45105c9e-db96-41c5-ba42-d56027ca318c-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"45105c9e-db96-41c5-ba42-d56027ca318c\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.035807 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/45105c9e-db96-41c5-ba42-d56027ca318c-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"45105c9e-db96-41c5-ba42-d56027ca318c\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.035851 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/45105c9e-db96-41c5-ba42-d56027ca318c-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"45105c9e-db96-41c5-ba42-d56027ca318c\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.035907 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/45105c9e-db96-41c5-ba42-d56027ca318c-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"45105c9e-db96-41c5-ba42-d56027ca318c\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.037297 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/45105c9e-db96-41c5-ba42-d56027ca318c-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"45105c9e-db96-41c5-ba42-d56027ca318c\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.039856 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/45105c9e-db96-41c5-ba42-d56027ca318c-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"45105c9e-db96-41c5-ba42-d56027ca318c\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.040084 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/45105c9e-db96-41c5-ba42-d56027ca318c-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"45105c9e-db96-41c5-ba42-d56027ca318c\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.040480 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/45105c9e-db96-41c5-ba42-d56027ca318c-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"45105c9e-db96-41c5-ba42-d56027ca318c\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.040881 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/45105c9e-db96-41c5-ba42-d56027ca318c-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"45105c9e-db96-41c5-ba42-d56027ca318c\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.042538 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/45105c9e-db96-41c5-ba42-d56027ca318c-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"45105c9e-db96-41c5-ba42-d56027ca318c\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.064225 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxkth\" (UniqueName: \"kubernetes.io/projected/45105c9e-db96-41c5-ba42-d56027ca318c-kube-api-access-hxkth\") pod 
\"alertmanager-metric-storage-0\" (UID: \"45105c9e-db96-41c5-ba42-d56027ca318c\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.215349 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.356524 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.358671 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.362492 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-rnpks" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.362545 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.362723 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.362772 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.362960 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.362973 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.363100 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.367168 
4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.368729 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.380888 4965 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/kube-state-metrics-0" secret="" err="failed to sync secret cache: timed out waiting for the condition" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.382159 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.441465 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.441511 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.441537 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-config\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 
10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.441556 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.441605 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.441794 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.441917 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.441978 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9ks6\" (UniqueName: \"kubernetes.io/projected/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-kube-api-access-m9ks6\") pod 
\"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.442001 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.442050 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.492616 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-xl8z6" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.545933 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.545989 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: 
\"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.546016 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9ks6\" (UniqueName: \"kubernetes.io/projected/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-kube-api-access-m9ks6\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.546036 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.546058 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.546907 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.547068 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.548755 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.548851 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.548882 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.551398 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.553289 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-config\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.553333 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.553353 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.553506 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.554055 4965 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.554091 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/53b03fef949ae6dbd52b5860402ecf7cae38e33da69a528411b5e62a8cf74f89/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.559905 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.562653 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.565326 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9ks6\" (UniqueName: \"kubernetes.io/projected/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-kube-api-access-m9ks6\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.566749 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.589859 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3\") pod \"prometheus-metric-storage-0\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:19 crc kubenswrapper[4965]: I0219 10:01:19.677697 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 10:01:21 crc kubenswrapper[4965]: I0219 10:01:21.866205 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 10:01:21 crc kubenswrapper[4965]: I0219 10:01:21.867560 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:21 crc kubenswrapper[4965]: I0219 10:01:21.869625 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-tfv8l" Feb 19 10:01:21 crc kubenswrapper[4965]: I0219 10:01:21.869848 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 19 10:01:21 crc kubenswrapper[4965]: I0219 10:01:21.870003 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 19 10:01:21 crc kubenswrapper[4965]: I0219 10:01:21.870163 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 19 10:01:21 crc kubenswrapper[4965]: I0219 10:01:21.870458 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 19 10:01:21 crc kubenswrapper[4965]: I0219 10:01:21.879876 4965 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 10:01:21 crc kubenswrapper[4965]: I0219 10:01:21.994322 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98633dba-c95c-4f35-a045-5c738d652492-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:21 crc kubenswrapper[4965]: I0219 10:01:21.994916 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntpvx\" (UniqueName: \"kubernetes.io/projected/98633dba-c95c-4f35-a045-5c738d652492-kube-api-access-ntpvx\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:21 crc kubenswrapper[4965]: I0219 10:01:21.995039 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/98633dba-c95c-4f35-a045-5c738d652492-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:21 crc kubenswrapper[4965]: I0219 10:01:21.995117 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/98633dba-c95c-4f35-a045-5c738d652492-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:21 crc kubenswrapper[4965]: I0219 10:01:21.995227 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98633dba-c95c-4f35-a045-5c738d652492-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " 
pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:21 crc kubenswrapper[4965]: I0219 10:01:21.995337 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98633dba-c95c-4f35-a045-5c738d652492-config\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:21 crc kubenswrapper[4965]: I0219 10:01:21.995441 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/98633dba-c95c-4f35-a045-5c738d652492-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:21 crc kubenswrapper[4965]: I0219 10:01:21.995580 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7cc60b37-793e-48bc-9b22-97549d1a5edf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7cc60b37-793e-48bc-9b22-97549d1a5edf\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.097361 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntpvx\" (UniqueName: \"kubernetes.io/projected/98633dba-c95c-4f35-a045-5c738d652492-kube-api-access-ntpvx\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.097441 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/98633dba-c95c-4f35-a045-5c738d652492-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:22 crc 
kubenswrapper[4965]: I0219 10:01:22.097468 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/98633dba-c95c-4f35-a045-5c738d652492-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.097964 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98633dba-c95c-4f35-a045-5c738d652492-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.098014 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98633dba-c95c-4f35-a045-5c738d652492-config\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.098035 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/98633dba-c95c-4f35-a045-5c738d652492-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.098089 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7cc60b37-793e-48bc-9b22-97549d1a5edf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7cc60b37-793e-48bc-9b22-97549d1a5edf\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.098118 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/98633dba-c95c-4f35-a045-5c738d652492-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.098867 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98633dba-c95c-4f35-a045-5c738d652492-config\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.101495 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/98633dba-c95c-4f35-a045-5c738d652492-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.104678 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/98633dba-c95c-4f35-a045-5c738d652492-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.105689 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98633dba-c95c-4f35-a045-5c738d652492-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.109461 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/98633dba-c95c-4f35-a045-5c738d652492-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " 
pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.111731 4965 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.111769 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7cc60b37-793e-48bc-9b22-97549d1a5edf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7cc60b37-793e-48bc-9b22-97549d1a5edf\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/979dae3ada0cac1c6d4123772373cd5a25a0788f76e1c8c2051d90e5de429d45/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.117371 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98633dba-c95c-4f35-a045-5c738d652492-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.120964 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntpvx\" (UniqueName: \"kubernetes.io/projected/98633dba-c95c-4f35-a045-5c738d652492-kube-api-access-ntpvx\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.154350 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7cc60b37-793e-48bc-9b22-97549d1a5edf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7cc60b37-793e-48bc-9b22-97549d1a5edf\") pod \"ovsdbserver-nb-0\" (UID: \"98633dba-c95c-4f35-a045-5c738d652492\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.172071 4965 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mwlb6"] Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.173151 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwlb6" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.184245 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-gff4s" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.184428 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.184546 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.185032 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mwlb6"] Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.190439 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.203846 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-jlns7"] Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.207878 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-jlns7" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.213243 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jlns7"] Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.300982 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3f408d9e-6ca2-490c-be7e-0516fa19db75-var-log\") pod \"ovn-controller-ovs-jlns7\" (UID: \"3f408d9e-6ca2-490c-be7e-0516fa19db75\") " pod="openstack/ovn-controller-ovs-jlns7" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.301089 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3f408d9e-6ca2-490c-be7e-0516fa19db75-var-run\") pod \"ovn-controller-ovs-jlns7\" (UID: \"3f408d9e-6ca2-490c-be7e-0516fa19db75\") " pod="openstack/ovn-controller-ovs-jlns7" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.301122 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkhxd\" (UniqueName: \"kubernetes.io/projected/3f408d9e-6ca2-490c-be7e-0516fa19db75-kube-api-access-fkhxd\") pod \"ovn-controller-ovs-jlns7\" (UID: \"3f408d9e-6ca2-490c-be7e-0516fa19db75\") " pod="openstack/ovn-controller-ovs-jlns7" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.301168 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a-combined-ca-bundle\") pod \"ovn-controller-mwlb6\" (UID: \"0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a\") " pod="openstack/ovn-controller-mwlb6" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.301225 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g97g9\" 
(UniqueName: \"kubernetes.io/projected/0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a-kube-api-access-g97g9\") pod \"ovn-controller-mwlb6\" (UID: \"0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a\") " pod="openstack/ovn-controller-mwlb6" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.301276 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3f408d9e-6ca2-490c-be7e-0516fa19db75-var-lib\") pod \"ovn-controller-ovs-jlns7\" (UID: \"3f408d9e-6ca2-490c-be7e-0516fa19db75\") " pod="openstack/ovn-controller-ovs-jlns7" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.301316 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a-var-run-ovn\") pod \"ovn-controller-mwlb6\" (UID: \"0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a\") " pod="openstack/ovn-controller-mwlb6" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.301348 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3f408d9e-6ca2-490c-be7e-0516fa19db75-etc-ovs\") pod \"ovn-controller-ovs-jlns7\" (UID: \"3f408d9e-6ca2-490c-be7e-0516fa19db75\") " pod="openstack/ovn-controller-ovs-jlns7" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.301735 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a-var-log-ovn\") pod \"ovn-controller-mwlb6\" (UID: \"0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a\") " pod="openstack/ovn-controller-mwlb6" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.301841 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a-ovn-controller-tls-certs\") pod \"ovn-controller-mwlb6\" (UID: \"0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a\") " pod="openstack/ovn-controller-mwlb6" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.301914 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a-scripts\") pod \"ovn-controller-mwlb6\" (UID: \"0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a\") " pod="openstack/ovn-controller-mwlb6" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.301975 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a-var-run\") pod \"ovn-controller-mwlb6\" (UID: \"0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a\") " pod="openstack/ovn-controller-mwlb6" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.302101 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f408d9e-6ca2-490c-be7e-0516fa19db75-scripts\") pod \"ovn-controller-ovs-jlns7\" (UID: \"3f408d9e-6ca2-490c-be7e-0516fa19db75\") " pod="openstack/ovn-controller-ovs-jlns7" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.403050 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3f408d9e-6ca2-490c-be7e-0516fa19db75-var-lib\") pod \"ovn-controller-ovs-jlns7\" (UID: \"3f408d9e-6ca2-490c-be7e-0516fa19db75\") " pod="openstack/ovn-controller-ovs-jlns7" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.403363 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a-var-run-ovn\") pod \"ovn-controller-mwlb6\" (UID: 
\"0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a\") " pod="openstack/ovn-controller-mwlb6" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.403447 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3f408d9e-6ca2-490c-be7e-0516fa19db75-etc-ovs\") pod \"ovn-controller-ovs-jlns7\" (UID: \"3f408d9e-6ca2-490c-be7e-0516fa19db75\") " pod="openstack/ovn-controller-ovs-jlns7" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.403521 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a-var-log-ovn\") pod \"ovn-controller-mwlb6\" (UID: \"0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a\") " pod="openstack/ovn-controller-mwlb6" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.403624 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a-ovn-controller-tls-certs\") pod \"ovn-controller-mwlb6\" (UID: \"0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a\") " pod="openstack/ovn-controller-mwlb6" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.403702 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a-scripts\") pod \"ovn-controller-mwlb6\" (UID: \"0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a\") " pod="openstack/ovn-controller-mwlb6" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.403780 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a-var-run\") pod \"ovn-controller-mwlb6\" (UID: \"0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a\") " pod="openstack/ovn-controller-mwlb6" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.403878 4965 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f408d9e-6ca2-490c-be7e-0516fa19db75-scripts\") pod \"ovn-controller-ovs-jlns7\" (UID: \"3f408d9e-6ca2-490c-be7e-0516fa19db75\") " pod="openstack/ovn-controller-ovs-jlns7" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.403958 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3f408d9e-6ca2-490c-be7e-0516fa19db75-var-log\") pod \"ovn-controller-ovs-jlns7\" (UID: \"3f408d9e-6ca2-490c-be7e-0516fa19db75\") " pod="openstack/ovn-controller-ovs-jlns7" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.404033 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3f408d9e-6ca2-490c-be7e-0516fa19db75-var-run\") pod \"ovn-controller-ovs-jlns7\" (UID: \"3f408d9e-6ca2-490c-be7e-0516fa19db75\") " pod="openstack/ovn-controller-ovs-jlns7" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.404110 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkhxd\" (UniqueName: \"kubernetes.io/projected/3f408d9e-6ca2-490c-be7e-0516fa19db75-kube-api-access-fkhxd\") pod \"ovn-controller-ovs-jlns7\" (UID: \"3f408d9e-6ca2-490c-be7e-0516fa19db75\") " pod="openstack/ovn-controller-ovs-jlns7" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.404209 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a-combined-ca-bundle\") pod \"ovn-controller-mwlb6\" (UID: \"0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a\") " pod="openstack/ovn-controller-mwlb6" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.404288 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g97g9\" (UniqueName: 
\"kubernetes.io/projected/0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a-kube-api-access-g97g9\") pod \"ovn-controller-mwlb6\" (UID: \"0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a\") " pod="openstack/ovn-controller-mwlb6" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.405076 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3f408d9e-6ca2-490c-be7e-0516fa19db75-var-lib\") pod \"ovn-controller-ovs-jlns7\" (UID: \"3f408d9e-6ca2-490c-be7e-0516fa19db75\") " pod="openstack/ovn-controller-ovs-jlns7" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.405284 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a-var-run-ovn\") pod \"ovn-controller-mwlb6\" (UID: \"0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a\") " pod="openstack/ovn-controller-mwlb6" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.405454 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3f408d9e-6ca2-490c-be7e-0516fa19db75-etc-ovs\") pod \"ovn-controller-ovs-jlns7\" (UID: \"3f408d9e-6ca2-490c-be7e-0516fa19db75\") " pod="openstack/ovn-controller-ovs-jlns7" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.405648 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a-var-log-ovn\") pod \"ovn-controller-mwlb6\" (UID: \"0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a\") " pod="openstack/ovn-controller-mwlb6" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.407576 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a-var-run\") pod \"ovn-controller-mwlb6\" (UID: \"0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a\") " pod="openstack/ovn-controller-mwlb6" Feb 19 
10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.407931 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3f408d9e-6ca2-490c-be7e-0516fa19db75-var-run\") pod \"ovn-controller-ovs-jlns7\" (UID: \"3f408d9e-6ca2-490c-be7e-0516fa19db75\") " pod="openstack/ovn-controller-ovs-jlns7" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.409562 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f408d9e-6ca2-490c-be7e-0516fa19db75-scripts\") pod \"ovn-controller-ovs-jlns7\" (UID: \"3f408d9e-6ca2-490c-be7e-0516fa19db75\") " pod="openstack/ovn-controller-ovs-jlns7" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.410404 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a-scripts\") pod \"ovn-controller-mwlb6\" (UID: \"0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a\") " pod="openstack/ovn-controller-mwlb6" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.416370 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a-ovn-controller-tls-certs\") pod \"ovn-controller-mwlb6\" (UID: \"0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a\") " pod="openstack/ovn-controller-mwlb6" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.416477 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3f408d9e-6ca2-490c-be7e-0516fa19db75-var-log\") pod \"ovn-controller-ovs-jlns7\" (UID: \"3f408d9e-6ca2-490c-be7e-0516fa19db75\") " pod="openstack/ovn-controller-ovs-jlns7" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.423410 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g97g9\" (UniqueName: 
\"kubernetes.io/projected/0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a-kube-api-access-g97g9\") pod \"ovn-controller-mwlb6\" (UID: \"0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a\") " pod="openstack/ovn-controller-mwlb6" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.423641 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a-combined-ca-bundle\") pod \"ovn-controller-mwlb6\" (UID: \"0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a\") " pod="openstack/ovn-controller-mwlb6" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.431410 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkhxd\" (UniqueName: \"kubernetes.io/projected/3f408d9e-6ca2-490c-be7e-0516fa19db75-kube-api-access-fkhxd\") pod \"ovn-controller-ovs-jlns7\" (UID: \"3f408d9e-6ca2-490c-be7e-0516fa19db75\") " pod="openstack/ovn-controller-ovs-jlns7" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.498895 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwlb6" Feb 19 10:01:22 crc kubenswrapper[4965]: I0219 10:01:22.528769 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-jlns7" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.458403 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.464411 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.468660 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-2nntq" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.468858 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.469037 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.469141 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.469457 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.581299 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1520d7ba-9d74-47f8-9c7a-9731ae9ff49e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.581385 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d8da21cf-c1c5-4301-8e68-e500d783a868\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d8da21cf-c1c5-4301-8e68-e500d783a868\") pod \"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.581428 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1520d7ba-9d74-47f8-9c7a-9731ae9ff49e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.581507 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw9ph\" (UniqueName: \"kubernetes.io/projected/1520d7ba-9d74-47f8-9c7a-9731ae9ff49e-kube-api-access-hw9ph\") pod \"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.581541 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1520d7ba-9d74-47f8-9c7a-9731ae9ff49e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.581572 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1520d7ba-9d74-47f8-9c7a-9731ae9ff49e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.581598 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1520d7ba-9d74-47f8-9c7a-9731ae9ff49e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.581644 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1520d7ba-9d74-47f8-9c7a-9731ae9ff49e-config\") pod \"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.682946 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1520d7ba-9d74-47f8-9c7a-9731ae9ff49e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.682994 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1520d7ba-9d74-47f8-9c7a-9731ae9ff49e-config\") pod \"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.683041 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1520d7ba-9d74-47f8-9c7a-9731ae9ff49e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.683080 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d8da21cf-c1c5-4301-8e68-e500d783a868\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d8da21cf-c1c5-4301-8e68-e500d783a868\") pod \"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.683105 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1520d7ba-9d74-47f8-9c7a-9731ae9ff49e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.683162 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw9ph\" (UniqueName: \"kubernetes.io/projected/1520d7ba-9d74-47f8-9c7a-9731ae9ff49e-kube-api-access-hw9ph\") pod \"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.683185 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1520d7ba-9d74-47f8-9c7a-9731ae9ff49e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.683222 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1520d7ba-9d74-47f8-9c7a-9731ae9ff49e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.683484 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1520d7ba-9d74-47f8-9c7a-9731ae9ff49e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.684369 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1520d7ba-9d74-47f8-9c7a-9731ae9ff49e-config\") pod \"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.684472 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1520d7ba-9d74-47f8-9c7a-9731ae9ff49e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.686607 4965 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.686656 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d8da21cf-c1c5-4301-8e68-e500d783a868\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d8da21cf-c1c5-4301-8e68-e500d783a868\") pod \"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/49352fe534f64c8362c0983bc89f2fae9f57d7923c2ca9ac12af9b49ed87a73c/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.688568 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1520d7ba-9d74-47f8-9c7a-9731ae9ff49e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.699935 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1520d7ba-9d74-47f8-9c7a-9731ae9ff49e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.706508 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1520d7ba-9d74-47f8-9c7a-9731ae9ff49e-metrics-certs-tls-certs\") pod 
\"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.706518 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw9ph\" (UniqueName: \"kubernetes.io/projected/1520d7ba-9d74-47f8-9c7a-9731ae9ff49e-kube-api-access-hw9ph\") pod \"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.746426 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d8da21cf-c1c5-4301-8e68-e500d783a868\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d8da21cf-c1c5-4301-8e68-e500d783a868\") pod \"ovsdbserver-sb-0\" (UID: \"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:26 crc kubenswrapper[4965]: I0219 10:01:26.797565 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 10:01:27 crc kubenswrapper[4965]: E0219 10:01:27.164569 4965 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 19 10:01:27 crc kubenswrapper[4965]: E0219 10:01:27.164722 4965 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvqmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-r6prk_openstack(218ae800-f2dc-4ae1-beeb-bf4847797fbd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:01:27 crc kubenswrapper[4965]: E0219 10:01:27.165941 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-r6prk" podUID="218ae800-f2dc-4ae1-beeb-bf4847797fbd" Feb 19 10:01:27 crc kubenswrapper[4965]: E0219 10:01:27.216967 4965 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 19 10:01:27 crc kubenswrapper[4965]: E0219 10:01:27.217632 4965 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g9kkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePul
lPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-7djwj_openstack(7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:01:27 crc kubenswrapper[4965]: E0219 10:01:27.218830 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-7djwj" podUID="7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d" Feb 19 10:01:27 crc kubenswrapper[4965]: I0219 10:01:27.819493 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 19 10:01:27 crc kubenswrapper[4965]: W0219 10:01:27.893471 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45105c9e_db96_41c5_ba42_d56027ca318c.slice/crio-067e7b32a6471cbe1fc4f88ca7404f99592239d427d133a25ff8709028119881 WatchSource:0}: Error finding container 067e7b32a6471cbe1fc4f88ca7404f99592239d427d133a25ff8709028119881: Status 404 returned error can't find the container with id 067e7b32a6471cbe1fc4f88ca7404f99592239d427d133a25ff8709028119881 Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.251294 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7djwj" Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.273952 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 10:01:28 crc kubenswrapper[4965]: W0219 10:01:28.290395 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod215df1f4_6c30_4144_b141_5a867e8d2728.slice/crio-715232d360b3ff1b1721ed02e29fe93b8a0029db8fb785a958ff6ead6e239df6 WatchSource:0}: Error finding container 715232d360b3ff1b1721ed02e29fe93b8a0029db8fb785a958ff6ead6e239df6: Status 404 returned error can't find the container with id 715232d360b3ff1b1721ed02e29fe93b8a0029db8fb785a958ff6ead6e239df6 Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.317213 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9kkq\" (UniqueName: \"kubernetes.io/projected/7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d-kube-api-access-g9kkq\") pod \"7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d\" (UID: \"7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d\") " Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.317283 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d-dns-svc\") pod \"7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d\" (UID: \"7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d\") " Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.317407 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d-config\") pod \"7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d\" (UID: \"7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d\") " Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.317993 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d" (UID: "7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.318419 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d-config" (OuterVolumeSpecName: "config") pod "7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d" (UID: "7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.321795 4965 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.321819 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.326829 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mwlb6"] Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.329261 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d-kube-api-access-g9kkq" (OuterVolumeSpecName: "kube-api-access-g9kkq") pod "7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d" (UID: "7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d"). InnerVolumeSpecName "kube-api-access-g9kkq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:01:28 crc kubenswrapper[4965]: W0219 10:01:28.333573 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod305a32d6_c9f8_4494_b356_75d6c54c7467.slice/crio-044219c30685117e651b3bff2a8dd282596bcb8cd3e142d14a7a2a6f0051de8d WatchSource:0}: Error finding container 044219c30685117e651b3bff2a8dd282596bcb8cd3e142d14a7a2a6f0051de8d: Status 404 returned error can't find the container with id 044219c30685117e651b3bff2a8dd282596bcb8cd3e142d14a7a2a6f0051de8d Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.333712 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 10:01:28 crc kubenswrapper[4965]: W0219 10:01:28.339679 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7a4c9f4_b898_43b4_812d_ab4f17c2124d.slice/crio-54b60bf8c3e09744a63228ccd8c5feb0566fcd2b2b377e8bc67a3ba6834884d6 WatchSource:0}: Error finding container 54b60bf8c3e09744a63228ccd8c5feb0566fcd2b2b377e8bc67a3ba6834884d6: Status 404 returned error can't find the container with id 54b60bf8c3e09744a63228ccd8c5feb0566fcd2b2b377e8bc67a3ba6834884d6 Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.343179 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.343535 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-r6prk" Feb 19 10:01:28 crc kubenswrapper[4965]: W0219 10:01:28.343781 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbd64606_53f8_484e_b8d2_c0fef4acb1bd.slice/crio-1607d2fb6dd1dc2d9aa264e00ead79e3964938bdf143825f4964dc7e018ee31b WatchSource:0}: Error finding container 1607d2fb6dd1dc2d9aa264e00ead79e3964938bdf143825f4964dc7e018ee31b: Status 404 returned error can't find the container with id 1607d2fb6dd1dc2d9aa264e00ead79e3964938bdf143825f4964dc7e018ee31b Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.352710 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.358139 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.413891 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 10:01:28 crc kubenswrapper[4965]: W0219 10:01:28.415391 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1520d7ba_9d74_47f8_9c7a_9731ae9ff49e.slice/crio-28531097cc34961d054670d356030d92dd8c0ad114143d50585673e46a385e47 WatchSource:0}: Error finding container 28531097cc34961d054670d356030d92dd8c0ad114143d50585673e46a385e47: Status 404 returned error can't find the container with id 28531097cc34961d054670d356030d92dd8c0ad114143d50585673e46a385e47 Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.422895 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvqmk\" (UniqueName: \"kubernetes.io/projected/218ae800-f2dc-4ae1-beeb-bf4847797fbd-kube-api-access-lvqmk\") pod \"218ae800-f2dc-4ae1-beeb-bf4847797fbd\" (UID: \"218ae800-f2dc-4ae1-beeb-bf4847797fbd\") " Feb 19 
10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.423993 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/218ae800-f2dc-4ae1-beeb-bf4847797fbd-config\") pod \"218ae800-f2dc-4ae1-beeb-bf4847797fbd\" (UID: \"218ae800-f2dc-4ae1-beeb-bf4847797fbd\") " Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.424487 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9kkq\" (UniqueName: \"kubernetes.io/projected/7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d-kube-api-access-g9kkq\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.424903 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/218ae800-f2dc-4ae1-beeb-bf4847797fbd-config" (OuterVolumeSpecName: "config") pod "218ae800-f2dc-4ae1-beeb-bf4847797fbd" (UID: "218ae800-f2dc-4ae1-beeb-bf4847797fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.427424 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/218ae800-f2dc-4ae1-beeb-bf4847797fbd-kube-api-access-lvqmk" (OuterVolumeSpecName: "kube-api-access-lvqmk") pod "218ae800-f2dc-4ae1-beeb-bf4847797fbd" (UID: "218ae800-f2dc-4ae1-beeb-bf4847797fbd"). InnerVolumeSpecName "kube-api-access-lvqmk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.526795 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvqmk\" (UniqueName: \"kubernetes.io/projected/218ae800-f2dc-4ae1-beeb-bf4847797fbd-kube-api-access-lvqmk\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.526833 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/218ae800-f2dc-4ae1-beeb-bf4847797fbd-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:01:28 crc kubenswrapper[4965]: I0219 10:01:28.537652 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.129555 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-r6prk" event={"ID":"218ae800-f2dc-4ae1-beeb-bf4847797fbd","Type":"ContainerDied","Data":"bca4b019e1c6e795a97e003de0e38a69d9169d9f351fe0e1bf37e00d374f4972"} Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.143782 4965 generic.go:334] "Generic (PLEG): container finished" podID="3f73b9d2-a434-4638-bce4-6c710166a455" containerID="35e15293de36b307843697bb1831d782f7c419a79a2a97d2244ac5438bb7255b" exitCode=0 Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.143873 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5xvv9" event={"ID":"3f73b9d2-a434-4638-bce4-6c710166a455","Type":"ContainerDied","Data":"35e15293de36b307843697bb1831d782f7c419a79a2a97d2244ac5438bb7255b"} Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.151081 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"45105c9e-db96-41c5-ba42-d56027ca318c","Type":"ContainerStarted","Data":"067e7b32a6471cbe1fc4f88ca7404f99592239d427d133a25ff8709028119881"} Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 
10:01:29.157416 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-r6prk" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.160966 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"215df1f4-6c30-4144-b141-5a867e8d2728","Type":"ContainerStarted","Data":"715232d360b3ff1b1721ed02e29fe93b8a0029db8fb785a958ff6ead6e239df6"} Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.161409 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.166302 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"40caef4c-7f84-42cb-b51c-b0884efc2052","Type":"ContainerStarted","Data":"c001bc1d8f4d6fb7bb3f2bcf69d20aef5acde83f316d14b5e81a02b2e75d7edb"} Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.169498 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bbd64606-53f8-484e-b8d2-c0fef4acb1bd","Type":"ContainerStarted","Data":"1607d2fb6dd1dc2d9aa264e00ead79e3964938bdf143825f4964dc7e018ee31b"} Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.191355 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"305a32d6-c9f8-4494-b356-75d6c54c7467","Type":"ContainerStarted","Data":"044219c30685117e651b3bff2a8dd282596bcb8cd3e142d14a7a2a6f0051de8d"} Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.195577 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7djwj" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.195682 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7djwj" event={"ID":"7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d","Type":"ContainerDied","Data":"bc2dd8e8e2ae456cec3cb0ff7ce55ae8df59319bc1ba8f8876637179ffb4d0f6"} Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.224775 4965 generic.go:334] "Generic (PLEG): container finished" podID="45fd5ec9-f248-4c13-b4a5-85885283391e" containerID="7e24aa672508623ed5e44c0c18287abe244b42be8f6e1ea0a2aabc6d3d793a9e" exitCode=0 Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.266847 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hb29s" event={"ID":"45fd5ec9-f248-4c13-b4a5-85885283391e","Type":"ContainerDied","Data":"7e24aa672508623ed5e44c0c18287abe244b42be8f6e1ea0a2aabc6d3d793a9e"} Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.266902 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwlb6" event={"ID":"0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a","Type":"ContainerStarted","Data":"70bf9ecf5be81f0db1896808778fbd9b26124581845b70a6e45d219b5de77c58"} Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.266921 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jlns7"] Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.266940 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e","Type":"ContainerStarted","Data":"28531097cc34961d054670d356030d92dd8c0ad114143d50585673e46a385e47"} Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.266952 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"e7a4c9f4-b898-43b4-812d-ab4f17c2124d","Type":"ContainerStarted","Data":"54b60bf8c3e09744a63228ccd8c5feb0566fcd2b2b377e8bc67a3ba6834884d6"} Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.330233 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r6prk"] Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.334129 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r6prk"] Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.371255 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7djwj"] Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.376821 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7djwj"] Feb 19 10:01:29 crc kubenswrapper[4965]: E0219 10:01:29.469645 4965 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 19 10:01:29 crc kubenswrapper[4965]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/45fd5ec9-f248-4c13-b4a5-85885283391e/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 19 10:01:29 crc kubenswrapper[4965]: > podSandboxID="94e201d574249473114d4fc54e519fe0bfcec73df77264df290763c5eb533e91" Feb 19 10:01:29 crc kubenswrapper[4965]: E0219 10:01:29.469820 4965 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 10:01:29 crc kubenswrapper[4965]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hq544,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-hb29s_openstack(45fd5ec9-f248-4c13-b4a5-85885283391e): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/45fd5ec9-f248-4c13-b4a5-85885283391e/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 19 10:01:29 crc kubenswrapper[4965]: > logger="UnhandledError" Feb 19 10:01:29 crc kubenswrapper[4965]: E0219 10:01:29.471234 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/45fd5ec9-f248-4c13-b4a5-85885283391e/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-hb29s" podUID="45fd5ec9-f248-4c13-b4a5-85885283391e" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.713773 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq"] Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.715783 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.721949 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-bxhsg" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.722223 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.723774 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.724111 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.725536 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq"] Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.725736 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.833420 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkfn5\" (UniqueName: \"kubernetes.io/projected/afbb0d2a-5cd0-4358-b5b0-c22749400326-kube-api-access-wkfn5\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-ktrzq\" (UID: \"afbb0d2a-5cd0-4358-b5b0-c22749400326\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.833467 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/afbb0d2a-5cd0-4358-b5b0-c22749400326-cloudkitty-lokistack-distributor-http\") pod 
\"cloudkitty-lokistack-distributor-585d9bcbc-ktrzq\" (UID: \"afbb0d2a-5cd0-4358-b5b0-c22749400326\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.833512 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afbb0d2a-5cd0-4358-b5b0-c22749400326-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-ktrzq\" (UID: \"afbb0d2a-5cd0-4358-b5b0-c22749400326\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.833563 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afbb0d2a-5cd0-4358-b5b0-c22749400326-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-ktrzq\" (UID: \"afbb0d2a-5cd0-4358-b5b0-c22749400326\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.833592 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/afbb0d2a-5cd0-4358-b5b0-c22749400326-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-ktrzq\" (UID: \"afbb0d2a-5cd0-4358-b5b0-c22749400326\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.887408 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6"] Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.889653 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.895348 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.895576 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.903801 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.912014 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6"] Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.947614 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/7adcb318-8832-417d-814a-7a2d21c8af30-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-hb4c6\" (UID: \"7adcb318-8832-417d-814a-7a2d21c8af30\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.947676 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/7adcb318-8832-417d-814a-7a2d21c8af30-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-hb4c6\" (UID: \"7adcb318-8832-417d-814a-7a2d21c8af30\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.947712 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkfn5\" (UniqueName: \"kubernetes.io/projected/afbb0d2a-5cd0-4358-b5b0-c22749400326-kube-api-access-wkfn5\") pod 
\"cloudkitty-lokistack-distributor-585d9bcbc-ktrzq\" (UID: \"afbb0d2a-5cd0-4358-b5b0-c22749400326\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.947737 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/afbb0d2a-5cd0-4358-b5b0-c22749400326-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-ktrzq\" (UID: \"afbb0d2a-5cd0-4358-b5b0-c22749400326\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.947772 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afbb0d2a-5cd0-4358-b5b0-c22749400326-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-ktrzq\" (UID: \"afbb0d2a-5cd0-4358-b5b0-c22749400326\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.947810 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afbb0d2a-5cd0-4358-b5b0-c22749400326-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-ktrzq\" (UID: \"afbb0d2a-5cd0-4358-b5b0-c22749400326\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.947839 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/afbb0d2a-5cd0-4358-b5b0-c22749400326-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-ktrzq\" (UID: \"afbb0d2a-5cd0-4358-b5b0-c22749400326\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq" Feb 19 
10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.947863 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/7adcb318-8832-417d-814a-7a2d21c8af30-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-hb4c6\" (UID: \"7adcb318-8832-417d-814a-7a2d21c8af30\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.947886 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh6pq\" (UniqueName: \"kubernetes.io/projected/7adcb318-8832-417d-814a-7a2d21c8af30-kube-api-access-jh6pq\") pod \"cloudkitty-lokistack-querier-58c84b5844-hb4c6\" (UID: \"7adcb318-8832-417d-814a-7a2d21c8af30\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.947913 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7adcb318-8832-417d-814a-7a2d21c8af30-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-hb4c6\" (UID: \"7adcb318-8832-417d-814a-7a2d21c8af30\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.947941 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7adcb318-8832-417d-814a-7a2d21c8af30-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-hb4c6\" (UID: \"7adcb318-8832-417d-814a-7a2d21c8af30\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.950136 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/afbb0d2a-5cd0-4358-b5b0-c22749400326-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-ktrzq\" (UID: \"afbb0d2a-5cd0-4358-b5b0-c22749400326\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.950480 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afbb0d2a-5cd0-4358-b5b0-c22749400326-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-ktrzq\" (UID: \"afbb0d2a-5cd0-4358-b5b0-c22749400326\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.955939 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/afbb0d2a-5cd0-4358-b5b0-c22749400326-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-ktrzq\" (UID: \"afbb0d2a-5cd0-4358-b5b0-c22749400326\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.957908 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/afbb0d2a-5cd0-4358-b5b0-c22749400326-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-ktrzq\" (UID: \"afbb0d2a-5cd0-4358-b5b0-c22749400326\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq" Feb 19 10:01:29 crc kubenswrapper[4965]: I0219 10:01:29.985354 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkfn5\" (UniqueName: \"kubernetes.io/projected/afbb0d2a-5cd0-4358-b5b0-c22749400326-kube-api-access-wkfn5\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-ktrzq\" (UID: \"afbb0d2a-5cd0-4358-b5b0-c22749400326\") " 
pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.004359 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg"] Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.005768 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.012945 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.015314 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.022057 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg"] Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.036792 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.055076 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/7adcb318-8832-417d-814a-7a2d21c8af30-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-hb4c6\" (UID: \"7adcb318-8832-417d-814a-7a2d21c8af30\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.055127 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/7adcb318-8832-417d-814a-7a2d21c8af30-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-hb4c6\" (UID: \"7adcb318-8832-417d-814a-7a2d21c8af30\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.055445 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/7adcb318-8832-417d-814a-7a2d21c8af30-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-hb4c6\" (UID: \"7adcb318-8832-417d-814a-7a2d21c8af30\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.056687 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh6pq\" (UniqueName: \"kubernetes.io/projected/7adcb318-8832-417d-814a-7a2d21c8af30-kube-api-access-jh6pq\") pod \"cloudkitty-lokistack-querier-58c84b5844-hb4c6\" (UID: \"7adcb318-8832-417d-814a-7a2d21c8af30\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.056737 4965 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7adcb318-8832-417d-814a-7a2d21c8af30-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-hb4c6\" (UID: \"7adcb318-8832-417d-814a-7a2d21c8af30\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.056794 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7adcb318-8832-417d-814a-7a2d21c8af30-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-hb4c6\" (UID: \"7adcb318-8832-417d-814a-7a2d21c8af30\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.058004 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7adcb318-8832-417d-814a-7a2d21c8af30-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-hb4c6\" (UID: \"7adcb318-8832-417d-814a-7a2d21c8af30\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.058135 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7adcb318-8832-417d-814a-7a2d21c8af30-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-hb4c6\" (UID: \"7adcb318-8832-417d-814a-7a2d21c8af30\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.062090 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/7adcb318-8832-417d-814a-7a2d21c8af30-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-hb4c6\" (UID: \"7adcb318-8832-417d-814a-7a2d21c8af30\") " 
pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.062730 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/7adcb318-8832-417d-814a-7a2d21c8af30-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-hb4c6\" (UID: \"7adcb318-8832-417d-814a-7a2d21c8af30\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.064043 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/7adcb318-8832-417d-814a-7a2d21c8af30-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-hb4c6\" (UID: \"7adcb318-8832-417d-814a-7a2d21c8af30\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.084691 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh6pq\" (UniqueName: \"kubernetes.io/projected/7adcb318-8832-417d-814a-7a2d21c8af30-kube-api-access-jh6pq\") pod \"cloudkitty-lokistack-querier-58c84b5844-hb4c6\" (UID: \"7adcb318-8832-417d-814a-7a2d21c8af30\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.142315 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555"] Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.150319 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.154625 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.154994 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-26jr6" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.155209 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.155429 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.155574 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.155688 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.155909 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.158844 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmf5m\" (UniqueName: \"kubernetes.io/projected/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-kube-api-access-tmf5m\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.158893 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.158925 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.158952 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hldgm\" (UniqueName: \"kubernetes.io/projected/849f49ac-72be-49ce-ab6b-2eb5890a6337-kube-api-access-hldgm\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg\" (UID: \"849f49ac-72be-49ce-ab6b-2eb5890a6337\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.158967 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.158990 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/849f49ac-72be-49ce-ab6b-2eb5890a6337-cloudkitty-lokistack-query-frontend-http\") pod 
\"cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg\" (UID: \"849f49ac-72be-49ce-ab6b-2eb5890a6337\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.159028 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.159044 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/849f49ac-72be-49ce-ab6b-2eb5890a6337-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg\" (UID: \"849f49ac-72be-49ce-ab6b-2eb5890a6337\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.159069 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/849f49ac-72be-49ce-ab6b-2eb5890a6337-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg\" (UID: \"849f49ac-72be-49ce-ab6b-2eb5890a6337\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.159093 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.159128 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.159236 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.159274 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/849f49ac-72be-49ce-ab6b-2eb5890a6337-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg\" (UID: \"849f49ac-72be-49ce-ab6b-2eb5890a6337\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.159316 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.168536 4965 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl"] Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.170783 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.196255 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555"] Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.224656 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.225923 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.244158 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl"] Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.261040 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hldgm\" (UniqueName: \"kubernetes.io/projected/849f49ac-72be-49ce-ab6b-2eb5890a6337-kube-api-access-hldgm\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg\" (UID: \"849f49ac-72be-49ce-ab6b-2eb5890a6337\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.261169 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.261278 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/849f49ac-72be-49ce-ab6b-2eb5890a6337-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg\" (UID: \"849f49ac-72be-49ce-ab6b-2eb5890a6337\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.261339 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.261397 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/849f49ac-72be-49ce-ab6b-2eb5890a6337-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg\" (UID: \"849f49ac-72be-49ce-ab6b-2eb5890a6337\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.261485 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/849f49ac-72be-49ce-ab6b-2eb5890a6337-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg\" (UID: \"849f49ac-72be-49ce-ab6b-2eb5890a6337\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.261519 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-tls-secret\") pod 
\"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.261549 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.261651 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.261696 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/849f49ac-72be-49ce-ab6b-2eb5890a6337-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg\" (UID: \"849f49ac-72be-49ce-ab6b-2eb5890a6337\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.261743 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.261782 4965 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmf5m\" (UniqueName: \"kubernetes.io/projected/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-kube-api-access-tmf5m\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.261810 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.262778 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.263444 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.263512 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/849f49ac-72be-49ce-ab6b-2eb5890a6337-cloudkitty-lokistack-ca-bundle\") pod 
\"cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg\" (UID: \"849f49ac-72be-49ce-ab6b-2eb5890a6337\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.263557 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/849f49ac-72be-49ce-ab6b-2eb5890a6337-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg\" (UID: \"849f49ac-72be-49ce-ab6b-2eb5890a6337\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.261843 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.264175 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.264220 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: E0219 10:01:30.266326 4965 secret.go:188] Couldn't 
get secret openstack/cloudkitty-lokistack-gateway-http: secret "cloudkitty-lokistack-gateway-http" not found Feb 19 10:01:30 crc kubenswrapper[4965]: E0219 10:01:30.266388 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-tls-secret podName:b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a nodeName:}" failed. No retries permitted until 2026-02-19 10:01:30.766367642 +0000 UTC m=+1146.387689062 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-tls-secret") pod "cloudkitty-lokistack-gateway-7f8685b49f-h6555" (UID: "b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a") : secret "cloudkitty-lokistack-gateway-http" not found Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.267458 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.269755 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.269835 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/849f49ac-72be-49ce-ab6b-2eb5890a6337-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg\" (UID: 
\"849f49ac-72be-49ce-ab6b-2eb5890a6337\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.272274 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.276801 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/849f49ac-72be-49ce-ab6b-2eb5890a6337-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg\" (UID: \"849f49ac-72be-49ce-ab6b-2eb5890a6337\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.285140 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hldgm\" (UniqueName: \"kubernetes.io/projected/849f49ac-72be-49ce-ab6b-2eb5890a6337-kube-api-access-hldgm\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg\" (UID: \"849f49ac-72be-49ce-ab6b-2eb5890a6337\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.287951 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmf5m\" (UniqueName: \"kubernetes.io/projected/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-kube-api-access-tmf5m\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.325153 4965 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5xvv9" event={"ID":"3f73b9d2-a434-4638-bce4-6c710166a455","Type":"ContainerStarted","Data":"16b56c3d65bf036b7a875d7b26a3cbe7d7b1ab5b8e5737919cbd729aa3139250"} Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.325297 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-5xvv9" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.341612 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-5xvv9" podStartSLOduration=7.226425688 podStartE2EDuration="18.341592384s" podCreationTimestamp="2026-02-19 10:01:12 +0000 UTC" firstStartedPulling="2026-02-19 10:01:16.300656876 +0000 UTC m=+1131.921978186" lastFinishedPulling="2026-02-19 10:01:27.415823572 +0000 UTC m=+1143.037144882" observedRunningTime="2026-02-19 10:01:30.33977513 +0000 UTC m=+1145.961096460" watchObservedRunningTime="2026-02-19 10:01:30.341592384 +0000 UTC m=+1145.962913694" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.345497 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6263bd60-b2d0-44ff-ae54-874728576f1d","Type":"ContainerStarted","Data":"e10a4be2e0f74e37559727d687c0b37894c0b3202300f5aaed28101235b5b398"} Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.351172 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5b862187-0edd-4939-9260-d0d35653485c","Type":"ContainerStarted","Data":"51326f997eaf1ace2af27de52bcbb78fac5dfa795660deb14b465e642b2a9ca1"} Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.353290 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jlns7" event={"ID":"3f408d9e-6ca2-490c-be7e-0516fa19db75","Type":"ContainerStarted","Data":"d5f2b31cf3622081626810efdc3021f571a0cdbe877a995251d332f9d8b8f04a"} Feb 19 10:01:30 crc 
kubenswrapper[4965]: W0219 10:01:30.357857 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98633dba_c95c_4f35_a045_5c738d652492.slice/crio-d0f7de02bdba794d4cab3c98cc1c1e34cbe60448a23f4dd25b5ccef574b33732 WatchSource:0}: Error finding container d0f7de02bdba794d4cab3c98cc1c1e34cbe60448a23f4dd25b5ccef574b33732: Status 404 returned error can't find the container with id d0f7de02bdba794d4cab3c98cc1c1e34cbe60448a23f4dd25b5ccef574b33732 Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.359759 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.369266 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c673b0f-7739-4b94-99b9-abd66fb51937-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.369338 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/3c673b0f-7739-4b94-99b9-abd66fb51937-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.369364 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fpp6\" (UniqueName: \"kubernetes.io/projected/3c673b0f-7739-4b94-99b9-abd66fb51937-kube-api-access-6fpp6\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: 
\"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.369435 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c673b0f-7739-4b94-99b9-abd66fb51937-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.369528 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/3c673b0f-7739-4b94-99b9-abd66fb51937-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.369738 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/3c673b0f-7739-4b94-99b9-abd66fb51937-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.369806 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c673b0f-7739-4b94-99b9-abd66fb51937-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.369840 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/3c673b0f-7739-4b94-99b9-abd66fb51937-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.369952 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/3c673b0f-7739-4b94-99b9-abd66fb51937-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.472023 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/3c673b0f-7739-4b94-99b9-abd66fb51937-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.472093 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/3c673b0f-7739-4b94-99b9-abd66fb51937-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.472119 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/3c673b0f-7739-4b94-99b9-abd66fb51937-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: 
\"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.472685 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c673b0f-7739-4b94-99b9-abd66fb51937-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.472768 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/3c673b0f-7739-4b94-99b9-abd66fb51937-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.473802 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c673b0f-7739-4b94-99b9-abd66fb51937-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.474121 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c673b0f-7739-4b94-99b9-abd66fb51937-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.474357 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c673b0f-7739-4b94-99b9-abd66fb51937-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.474435 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/3c673b0f-7739-4b94-99b9-abd66fb51937-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.474464 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fpp6\" (UniqueName: \"kubernetes.io/projected/3c673b0f-7739-4b94-99b9-abd66fb51937-kube-api-access-6fpp6\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.474609 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c673b0f-7739-4b94-99b9-abd66fb51937-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.474649 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/3c673b0f-7739-4b94-99b9-abd66fb51937-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: 
\"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.475596 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c673b0f-7739-4b94-99b9-abd66fb51937-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.475740 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/3c673b0f-7739-4b94-99b9-abd66fb51937-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.477382 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/3c673b0f-7739-4b94-99b9-abd66fb51937-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.478527 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/3c673b0f-7739-4b94-99b9-abd66fb51937-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.479870 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: 
\"kubernetes.io/secret/3c673b0f-7739-4b94-99b9-abd66fb51937-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.506731 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fpp6\" (UniqueName: \"kubernetes.io/projected/3c673b0f-7739-4b94-99b9-abd66fb51937-kube-api-access-6fpp6\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-9vkbl\" (UID: \"3c673b0f-7739-4b94-99b9-abd66fb51937\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.522508 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.782310 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.788478 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-h6555\" (UID: \"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.791083 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.865987 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.867048 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.874068 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.874257 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.880972 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.977549 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.978731 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.981177 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.981432 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.986397 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.986437 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm5k7\" (UniqueName: \"kubernetes.io/projected/faab82f2-bc31-438d-b329-9a31d6ba5040-kube-api-access-hm5k7\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.986460 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/faab82f2-bc31-438d-b329-9a31d6ba5040-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.986510 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/faab82f2-bc31-438d-b329-9a31d6ba5040-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.986553 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/faab82f2-bc31-438d-b329-9a31d6ba5040-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.986582 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/faab82f2-bc31-438d-b329-9a31d6ba5040-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.986604 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faab82f2-bc31-438d-b329-9a31d6ba5040-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.986631 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:30 crc kubenswrapper[4965]: I0219 10:01:30.996889 4965 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.013007 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq"] Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.067628 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.069177 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.073648 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.074251 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.086594 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.091050 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.091169 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/5038aafe-e39d-479c-b355-bbac1a77fa4a-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5038aafe-e39d-479c-b355-bbac1a77fa4a\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:31 crc 
kubenswrapper[4965]: I0219 10:01:31.091221 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5038aafe-e39d-479c-b355-bbac1a77fa4a-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5038aafe-e39d-479c-b355-bbac1a77fa4a\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.091271 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm5k7\" (UniqueName: \"kubernetes.io/projected/faab82f2-bc31-438d-b329-9a31d6ba5040-kube-api-access-hm5k7\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.091302 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5038aafe-e39d-479c-b355-bbac1a77fa4a-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5038aafe-e39d-479c-b355-bbac1a77fa4a\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.091335 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/faab82f2-bc31-438d-b329-9a31d6ba5040-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.091398 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: 
\"5038aafe-e39d-479c-b355-bbac1a77fa4a\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.091450 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faab82f2-bc31-438d-b329-9a31d6ba5040-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.091537 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/faab82f2-bc31-438d-b329-9a31d6ba5040-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.091549 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.091586 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/faab82f2-bc31-438d-b329-9a31d6ba5040-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.095026 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/faab82f2-bc31-438d-b329-9a31d6ba5040-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.095074 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/5038aafe-e39d-479c-b355-bbac1a77fa4a-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5038aafe-e39d-479c-b355-bbac1a77fa4a\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.095131 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.095240 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f85zx\" (UniqueName: \"kubernetes.io/projected/5038aafe-e39d-479c-b355-bbac1a77fa4a-kube-api-access-f85zx\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5038aafe-e39d-479c-b355-bbac1a77fa4a\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.095271 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/5038aafe-e39d-479c-b355-bbac1a77fa4a-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5038aafe-e39d-479c-b355-bbac1a77fa4a\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.095561 4965 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/faab82f2-bc31-438d-b329-9a31d6ba5040-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.097510 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.098806 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faab82f2-bc31-438d-b329-9a31d6ba5040-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.099411 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faab82f2-bc31-438d-b329-9a31d6ba5040-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.113037 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm5k7\" (UniqueName: \"kubernetes.io/projected/faab82f2-bc31-438d-b329-9a31d6ba5040-kube-api-access-hm5k7\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.113168 4965 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/faab82f2-bc31-438d-b329-9a31d6ba5040-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.116325 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/faab82f2-bc31-438d-b329-9a31d6ba5040-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.118673 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.121617 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"faab82f2-bc31-438d-b329-9a31d6ba5040\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.199687 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/f9902193-fba0-4ea4-8de6-352459b1c13f-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"f9902193-fba0-4ea4-8de6-352459b1c13f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 
10:01:31.199742 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"f9902193-fba0-4ea4-8de6-352459b1c13f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.199828 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/f9902193-fba0-4ea4-8de6-352459b1c13f-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"f9902193-fba0-4ea4-8de6-352459b1c13f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.199856 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9902193-fba0-4ea4-8de6-352459b1c13f-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"f9902193-fba0-4ea4-8de6-352459b1c13f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.199878 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/5038aafe-e39d-479c-b355-bbac1a77fa4a-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5038aafe-e39d-479c-b355-bbac1a77fa4a\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.199942 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9sbh\" (UniqueName: \"kubernetes.io/projected/f9902193-fba0-4ea4-8de6-352459b1c13f-kube-api-access-g9sbh\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: 
\"f9902193-fba0-4ea4-8de6-352459b1c13f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.199961 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/f9902193-fba0-4ea4-8de6-352459b1c13f-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"f9902193-fba0-4ea4-8de6-352459b1c13f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.199990 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f85zx\" (UniqueName: \"kubernetes.io/projected/5038aafe-e39d-479c-b355-bbac1a77fa4a-kube-api-access-f85zx\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5038aafe-e39d-479c-b355-bbac1a77fa4a\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.201662 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/5038aafe-e39d-479c-b355-bbac1a77fa4a-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5038aafe-e39d-479c-b355-bbac1a77fa4a\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.201740 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/5038aafe-e39d-479c-b355-bbac1a77fa4a-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5038aafe-e39d-479c-b355-bbac1a77fa4a\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.201759 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9902193-fba0-4ea4-8de6-352459b1c13f-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"f9902193-fba0-4ea4-8de6-352459b1c13f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.201780 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5038aafe-e39d-479c-b355-bbac1a77fa4a-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5038aafe-e39d-479c-b355-bbac1a77fa4a\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.201893 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5038aafe-e39d-479c-b355-bbac1a77fa4a-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5038aafe-e39d-479c-b355-bbac1a77fa4a\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.201967 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5038aafe-e39d-479c-b355-bbac1a77fa4a\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.202148 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5038aafe-e39d-479c-b355-bbac1a77fa4a\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.203454 4965 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5038aafe-e39d-479c-b355-bbac1a77fa4a-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5038aafe-e39d-479c-b355-bbac1a77fa4a\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.205064 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5038aafe-e39d-479c-b355-bbac1a77fa4a-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5038aafe-e39d-479c-b355-bbac1a77fa4a\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.210752 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/5038aafe-e39d-479c-b355-bbac1a77fa4a-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5038aafe-e39d-479c-b355-bbac1a77fa4a\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.210764 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/5038aafe-e39d-479c-b355-bbac1a77fa4a-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5038aafe-e39d-479c-b355-bbac1a77fa4a\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.213294 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/5038aafe-e39d-479c-b355-bbac1a77fa4a-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5038aafe-e39d-479c-b355-bbac1a77fa4a\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 
10:01:31.218708 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="218ae800-f2dc-4ae1-beeb-bf4847797fbd" path="/var/lib/kubelet/pods/218ae800-f2dc-4ae1-beeb-bf4847797fbd/volumes" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.219936 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d" path="/var/lib/kubelet/pods/7a3ae7dc-3ce4-4d63-9e90-d005f3de3d8d/volumes" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.237214 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f85zx\" (UniqueName: \"kubernetes.io/projected/5038aafe-e39d-479c-b355-bbac1a77fa4a-kube-api-access-f85zx\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5038aafe-e39d-479c-b355-bbac1a77fa4a\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.240279 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.248435 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5038aafe-e39d-479c-b355-bbac1a77fa4a\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.296028 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.304832 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/f9902193-fba0-4ea4-8de6-352459b1c13f-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"f9902193-fba0-4ea4-8de6-352459b1c13f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.304914 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"f9902193-fba0-4ea4-8de6-352459b1c13f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.305051 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/f9902193-fba0-4ea4-8de6-352459b1c13f-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"f9902193-fba0-4ea4-8de6-352459b1c13f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.305142 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9902193-fba0-4ea4-8de6-352459b1c13f-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"f9902193-fba0-4ea4-8de6-352459b1c13f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.305278 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9sbh\" (UniqueName: \"kubernetes.io/projected/f9902193-fba0-4ea4-8de6-352459b1c13f-kube-api-access-g9sbh\") pod 
\"cloudkitty-lokistack-index-gateway-0\" (UID: \"f9902193-fba0-4ea4-8de6-352459b1c13f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.305317 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/f9902193-fba0-4ea4-8de6-352459b1c13f-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"f9902193-fba0-4ea4-8de6-352459b1c13f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.305484 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9902193-fba0-4ea4-8de6-352459b1c13f-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"f9902193-fba0-4ea4-8de6-352459b1c13f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.306525 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9902193-fba0-4ea4-8de6-352459b1c13f-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"f9902193-fba0-4ea4-8de6-352459b1c13f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.308403 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"f9902193-fba0-4ea4-8de6-352459b1c13f\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.308580 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9902193-fba0-4ea4-8de6-352459b1c13f-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"f9902193-fba0-4ea4-8de6-352459b1c13f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.311890 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/f9902193-fba0-4ea4-8de6-352459b1c13f-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"f9902193-fba0-4ea4-8de6-352459b1c13f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.317408 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/f9902193-fba0-4ea4-8de6-352459b1c13f-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"f9902193-fba0-4ea4-8de6-352459b1c13f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.342990 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9sbh\" (UniqueName: \"kubernetes.io/projected/f9902193-fba0-4ea4-8de6-352459b1c13f-kube-api-access-g9sbh\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"f9902193-fba0-4ea4-8de6-352459b1c13f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.343862 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/f9902193-fba0-4ea4-8de6-352459b1c13f-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"f9902193-fba0-4ea4-8de6-352459b1c13f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 
19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.369382 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"f9902193-fba0-4ea4-8de6-352459b1c13f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.384597 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:01:31 crc kubenswrapper[4965]: I0219 10:01:31.403467 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"98633dba-c95c-4f35-a045-5c738d652492","Type":"ContainerStarted","Data":"d0f7de02bdba794d4cab3c98cc1c1e34cbe60448a23f4dd25b5ccef574b33732"} Feb 19 10:01:32 crc kubenswrapper[4965]: I0219 10:01:32.410274 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq" event={"ID":"afbb0d2a-5cd0-4358-b5b0-c22749400326","Type":"ContainerStarted","Data":"ce846229bec310eb94e89c5aef44ae3a3a318d790a9d109ea1bb7dad646ab8a1"} Feb 19 10:01:32 crc kubenswrapper[4965]: I0219 10:01:32.854871 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6"] Feb 19 10:01:34 crc kubenswrapper[4965]: I0219 10:01:34.952454 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg"] Feb 19 10:01:37 crc kubenswrapper[4965]: I0219 10:01:37.452144 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" event={"ID":"7adcb318-8832-417d-814a-7a2d21c8af30","Type":"ContainerStarted","Data":"04ded600f25cfb134421877575b85ff82771e5ba689461d01e6e39321e33746a"} Feb 19 10:01:37 crc kubenswrapper[4965]: I0219 10:01:37.720418 4965 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-5xvv9" Feb 19 10:01:37 crc kubenswrapper[4965]: I0219 10:01:37.782258 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hb29s"] Feb 19 10:01:41 crc kubenswrapper[4965]: E0219 10:01:41.797355 4965 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified" Feb 19 10:01:41 crc kubenswrapper[4965]: E0219 10:01:41.798106 4965 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-sb,Image:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n76h5bh696h579h5d9h55bh69hc5hddh5fh559h5c8h7fh658h57h58bh544h54dh6fh584h5c9hfbh86h55dh558h5bh5dbh658h64fh5b8h66fhf5q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.cr
t,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hw9ph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(1520d7ba-9d74-47f8-9c7a-9731ae9ff49e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:01:42 crc kubenswrapper[4965]: I0219 10:01:42.496269 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg" event={"ID":"849f49ac-72be-49ce-ab6b-2eb5890a6337","Type":"ContainerStarted","Data":"d474a217bdc22d3744fd495762571d224947f88782ffacc71f78d3cd34a51ff5"} Feb 19 10:01:43 crc kubenswrapper[4965]: E0219 10:01:43.827224 4965 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Feb 19 10:01:43 crc kubenswrapper[4965]: E0219 10:01:43.829175 4965 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5cchd5h59hb5h5dbh5b9h699h567h58bh547h55dh66fh6h66h688h694h547hfch646h5h5c4h56fh5bh5cch56bh68dh676h55h5b7h55chb5h5bfq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g97g9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:
nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-mwlb6_openstack(0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:01:43 crc kubenswrapper[4965]: E0219 10:01:43.830855 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-mwlb6" podUID="0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a" Feb 19 10:01:44 crc kubenswrapper[4965]: I0219 10:01:44.435064 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 19 10:01:44 crc kubenswrapper[4965]: I0219 
10:01:44.449943 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555"] Feb 19 10:01:44 crc kubenswrapper[4965]: I0219 10:01:44.466485 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl"] Feb 19 10:01:44 crc kubenswrapper[4965]: E0219 10:01:44.523077 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-mwlb6" podUID="0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a" Feb 19 10:01:44 crc kubenswrapper[4965]: W0219 10:01:44.524037 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaab82f2_bc31_438d_b329_9a31d6ba5040.slice/crio-f812e23569c43c2bd6301f7614bbb45a47b762ebe8d351c832cf5b60d514d00d WatchSource:0}: Error finding container f812e23569c43c2bd6301f7614bbb45a47b762ebe8d351c832cf5b60d514d00d: Status 404 returned error can't find the container with id f812e23569c43c2bd6301f7614bbb45a47b762ebe8d351c832cf5b60d514d00d Feb 19 10:01:44 crc kubenswrapper[4965]: I0219 10:01:44.773884 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 19 10:01:44 crc kubenswrapper[4965]: I0219 10:01:44.838611 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 19 10:01:45 crc kubenswrapper[4965]: E0219 10:01:45.146468 4965 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 19 10:01:45 crc kubenswrapper[4965]: E0219 10:01:45.146554 4965 kuberuntime_image.go:55] "Failed to pull 
image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 19 10:01:45 crc kubenswrapper[4965]: E0219 10:01:45.146803 4965 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zcqdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(6263bd60-b2d0-44ff-ae54-874728576f1d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 19 10:01:45 crc kubenswrapper[4965]: E0219 10:01:45.148625 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="6263bd60-b2d0-44ff-ae54-874728576f1d"
Feb 19 10:01:45 crc kubenswrapper[4965]: I0219 10:01:45.522479 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"faab82f2-bc31-438d-b329-9a31d6ba5040","Type":"ContainerStarted","Data":"f812e23569c43c2bd6301f7614bbb45a47b762ebe8d351c832cf5b60d514d00d"}
Feb 19 10:01:45 crc kubenswrapper[4965]: I0219 10:01:45.524066 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" event={"ID":"3c673b0f-7739-4b94-99b9-abd66fb51937","Type":"ContainerStarted","Data":"8c09b976962061978d798541b72b2fba6a4e19213286123af72f44065dbb14de"}
Feb 19 10:01:45 crc kubenswrapper[4965]: I0219 10:01:45.526265 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hb29s" event={"ID":"45fd5ec9-f248-4c13-b4a5-85885283391e","Type":"ContainerStarted","Data":"4510ff5521596128b6842f4d75fff21620f3b5eab589a07f182813e8f8a49a65"}
Feb 19 10:01:45 crc kubenswrapper[4965]: I0219 10:01:45.526330 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-hb29s" podUID="45fd5ec9-f248-4c13-b4a5-85885283391e" containerName="dnsmasq-dns" containerID="cri-o://4510ff5521596128b6842f4d75fff21620f3b5eab589a07f182813e8f8a49a65" gracePeriod=10
Feb 19 10:01:45 crc kubenswrapper[4965]: I0219 10:01:45.526361 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-hb29s"
Feb 19 10:01:45 crc kubenswrapper[4965]: I0219 10:01:45.528092 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" event={"ID":"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a","Type":"ContainerStarted","Data":"8f8179bce768790fb830f6290af9091ac8373d50e016bad0efda5181df2e520a"}
Feb 19 10:01:45 crc kubenswrapper[4965]: E0219 10:01:45.530906 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="6263bd60-b2d0-44ff-ae54-874728576f1d"
Feb 19 10:01:45 crc kubenswrapper[4965]: I0219 10:01:45.550903 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-hb29s" podStartSLOduration=20.068577681 podStartE2EDuration="34.550882928s" podCreationTimestamp="2026-02-19 10:01:11 +0000 UTC" firstStartedPulling="2026-02-19 10:01:12.900466251 +0000 UTC m=+1128.521787561" lastFinishedPulling="2026-02-19 10:01:27.382771498 +0000 UTC m=+1143.004092808" observedRunningTime="2026-02-19 10:01:45.548002927 +0000 UTC m=+1161.169324237" watchObservedRunningTime="2026-02-19 10:01:45.550882928 +0000 UTC m=+1161.172204238"
Feb 19 10:01:45 crc kubenswrapper[4965]: W0219 10:01:45.721847 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9902193_fba0_4ea4_8de6_352459b1c13f.slice/crio-430aea640a3f1b7c5c97cba7b02206f8da5de217dd8e9c85004514df0f81599b WatchSource:0}: Error finding container 430aea640a3f1b7c5c97cba7b02206f8da5de217dd8e9c85004514df0f81599b: Status 404 returned error can't find the container with id 430aea640a3f1b7c5c97cba7b02206f8da5de217dd8e9c85004514df0f81599b
Feb 19 10:01:46 crc kubenswrapper[4965]: I0219 10:01:46.536060 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"5038aafe-e39d-479c-b355-bbac1a77fa4a","Type":"ContainerStarted","Data":"836398b6be129f06ca59664b4cc9c5ba245615cd481e45dbc3f2b26b15a4d7d9"}
Feb 19 10:01:46 crc kubenswrapper[4965]: I0219 10:01:46.536926 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"f9902193-fba0-4ea4-8de6-352459b1c13f","Type":"ContainerStarted","Data":"430aea640a3f1b7c5c97cba7b02206f8da5de217dd8e9c85004514df0f81599b"}
Feb 19 10:01:46 crc kubenswrapper[4965]: I0219 10:01:46.538358 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bbd64606-53f8-484e-b8d2-c0fef4acb1bd","Type":"ContainerStarted","Data":"5a58c40e549604532d6c9dd9b699ffe7b8b46ce4c58064a2f3ecc8b63cbc14f1"}
Feb 19 10:01:46 crc kubenswrapper[4965]: I0219 10:01:46.540700 4965 generic.go:334] "Generic (PLEG): container finished" podID="45fd5ec9-f248-4c13-b4a5-85885283391e" containerID="4510ff5521596128b6842f4d75fff21620f3b5eab589a07f182813e8f8a49a65" exitCode=0
Feb 19 10:01:46 crc kubenswrapper[4965]: I0219 10:01:46.540736 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hb29s" event={"ID":"45fd5ec9-f248-4c13-b4a5-85885283391e","Type":"ContainerDied","Data":"4510ff5521596128b6842f4d75fff21620f3b5eab589a07f182813e8f8a49a65"}
Feb 19 10:01:46 crc kubenswrapper[4965]: I0219 10:01:46.540753 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hb29s" event={"ID":"45fd5ec9-f248-4c13-b4a5-85885283391e","Type":"ContainerDied","Data":"94e201d574249473114d4fc54e519fe0bfcec73df77264df290763c5eb533e91"}
Feb 19 10:01:46 crc kubenswrapper[4965]: I0219 10:01:46.540765 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94e201d574249473114d4fc54e519fe0bfcec73df77264df290763c5eb533e91"
Feb 19 10:01:46 crc kubenswrapper[4965]: I0219 10:01:46.601439 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 10:01:46 crc kubenswrapper[4965]: I0219 10:01:46.601498 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 10:01:46 crc kubenswrapper[4965]: I0219 10:01:46.601548 4965 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9"
Feb 19 10:01:46 crc kubenswrapper[4965]: I0219 10:01:46.602277 4965 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac6c3a11724d0b4226206f45a1c130a82ce4948594339da20a6fb6307209a67e"} pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 10:01:46 crc kubenswrapper[4965]: I0219 10:01:46.602345 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" containerID="cri-o://ac6c3a11724d0b4226206f45a1c130a82ce4948594339da20a6fb6307209a67e" gracePeriod=600
Feb 19 10:01:47 crc kubenswrapper[4965]: I0219 10:01:47.058748 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hb29s"
Feb 19 10:01:47 crc kubenswrapper[4965]: I0219 10:01:47.127338 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45fd5ec9-f248-4c13-b4a5-85885283391e-config\") pod \"45fd5ec9-f248-4c13-b4a5-85885283391e\" (UID: \"45fd5ec9-f248-4c13-b4a5-85885283391e\") "
Feb 19 10:01:47 crc kubenswrapper[4965]: I0219 10:01:47.127880 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq544\" (UniqueName: \"kubernetes.io/projected/45fd5ec9-f248-4c13-b4a5-85885283391e-kube-api-access-hq544\") pod \"45fd5ec9-f248-4c13-b4a5-85885283391e\" (UID: \"45fd5ec9-f248-4c13-b4a5-85885283391e\") "
Feb 19 10:01:47 crc kubenswrapper[4965]: I0219 10:01:47.127926 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45fd5ec9-f248-4c13-b4a5-85885283391e-dns-svc\") pod \"45fd5ec9-f248-4c13-b4a5-85885283391e\" (UID: \"45fd5ec9-f248-4c13-b4a5-85885283391e\") "
Feb 19 10:01:47 crc kubenswrapper[4965]: I0219 10:01:47.208154 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45fd5ec9-f248-4c13-b4a5-85885283391e-kube-api-access-hq544" (OuterVolumeSpecName: "kube-api-access-hq544") pod "45fd5ec9-f248-4c13-b4a5-85885283391e" (UID: "45fd5ec9-f248-4c13-b4a5-85885283391e"). InnerVolumeSpecName "kube-api-access-hq544". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:01:47 crc kubenswrapper[4965]: I0219 10:01:47.231337 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq544\" (UniqueName: \"kubernetes.io/projected/45fd5ec9-f248-4c13-b4a5-85885283391e-kube-api-access-hq544\") on node \"crc\" DevicePath \"\""
Feb 19 10:01:47 crc kubenswrapper[4965]: I0219 10:01:47.558860 4965 generic.go:334] "Generic (PLEG): container finished" podID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerID="ac6c3a11724d0b4226206f45a1c130a82ce4948594339da20a6fb6307209a67e" exitCode=0
Feb 19 10:01:47 crc kubenswrapper[4965]: I0219 10:01:47.566721 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hb29s"
Feb 19 10:01:47 crc kubenswrapper[4965]: I0219 10:01:47.709550 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerDied","Data":"ac6c3a11724d0b4226206f45a1c130a82ce4948594339da20a6fb6307209a67e"}
Feb 19 10:01:47 crc kubenswrapper[4965]: I0219 10:01:47.711261 4965 scope.go:117] "RemoveContainer" containerID="2381a024086baeb4b1c2a62ae636f4e796e3ec1a1ca046d7c801db6f42b09ff3"
Feb 19 10:01:47 crc kubenswrapper[4965]: I0219 10:01:47.710283 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"45105c9e-db96-41c5-ba42-d56027ca318c","Type":"ContainerStarted","Data":"68b64c597facc59d7d7ca9436c6435511991e443832c3646adbed43348d17a35"}
Feb 19 10:01:47 crc kubenswrapper[4965]: I0219 10:01:47.715288 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"215df1f4-6c30-4144-b141-5a867e8d2728","Type":"ContainerStarted","Data":"79a30afdb7745a41590c2896066c61a36de701e906943b46e9057a390881f253"}
Feb 19 10:01:48 crc kubenswrapper[4965]: I0219 10:01:48.428054 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45fd5ec9-f248-4c13-b4a5-85885283391e-config" (OuterVolumeSpecName: "config") pod "45fd5ec9-f248-4c13-b4a5-85885283391e" (UID: "45fd5ec9-f248-4c13-b4a5-85885283391e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:01:48 crc kubenswrapper[4965]: I0219 10:01:48.453632 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45fd5ec9-f248-4c13-b4a5-85885283391e-config\") on node \"crc\" DevicePath \"\""
Feb 19 10:01:48 crc kubenswrapper[4965]: I0219 10:01:48.537644 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45fd5ec9-f248-4c13-b4a5-85885283391e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "45fd5ec9-f248-4c13-b4a5-85885283391e" (UID: "45fd5ec9-f248-4c13-b4a5-85885283391e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:01:48 crc kubenswrapper[4965]: I0219 10:01:48.555992 4965 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45fd5ec9-f248-4c13-b4a5-85885283391e-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 10:01:48 crc kubenswrapper[4965]: I0219 10:01:48.576337 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" event={"ID":"7adcb318-8832-417d-814a-7a2d21c8af30","Type":"ContainerStarted","Data":"342ec8a666fa138eed464251c5ec08bcddb48619133ecd4887f8048c492f9276"}
Feb 19 10:01:48 crc kubenswrapper[4965]: I0219 10:01:48.576498 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6"
Feb 19 10:01:48 crc kubenswrapper[4965]: I0219 10:01:48.579170 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq" event={"ID":"afbb0d2a-5cd0-4358-b5b0-c22749400326","Type":"ContainerStarted","Data":"1dec3bc4e9195ace5710c165b2e04d0f9bc1512655c00ee942e4db818eb85bb5"}
Feb 19 10:01:48 crc kubenswrapper[4965]: I0219 10:01:48.579355 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq"
Feb 19 10:01:48 crc kubenswrapper[4965]: I0219 10:01:48.581180 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"40caef4c-7f84-42cb-b51c-b0884efc2052","Type":"ContainerStarted","Data":"ca775def917d30efdbd9a3c98a44664addf959106e034950a3a0ffa200758f90"}
Feb 19 10:01:48 crc kubenswrapper[4965]: I0219 10:01:48.581306 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Feb 19 10:01:48 crc kubenswrapper[4965]: I0219 10:01:48.582993 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e7a4c9f4-b898-43b4-812d-ab4f17c2124d","Type":"ContainerStarted","Data":"fcb4fa855945951961264a47f8ad49bbe1837e56000c48259f5d69179ff8e515"}
Feb 19 10:01:48 crc kubenswrapper[4965]: I0219 10:01:48.603054 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" podStartSLOduration=11.810204525 podStartE2EDuration="19.603031076s" podCreationTimestamp="2026-02-19 10:01:29 +0000 UTC" firstStartedPulling="2026-02-19 10:01:37.027916865 +0000 UTC m=+1152.649238175" lastFinishedPulling="2026-02-19 10:01:44.820743416 +0000 UTC m=+1160.442064726" observedRunningTime="2026-02-19 10:01:48.593547176 +0000 UTC m=+1164.214868486" watchObservedRunningTime="2026-02-19 10:01:48.603031076 +0000 UTC m=+1164.224352386"
Feb 19 10:01:48 crc kubenswrapper[4965]: I0219 10:01:48.612314 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq" podStartSLOduration=6.524878673 podStartE2EDuration="19.612294862s" podCreationTimestamp="2026-02-19 10:01:29 +0000 UTC" firstStartedPulling="2026-02-19 10:01:31.402980202 +0000 UTC m=+1147.024301512" lastFinishedPulling="2026-02-19 10:01:44.490396391 +0000 UTC m=+1160.111717701" observedRunningTime="2026-02-19 10:01:48.608992592 +0000 UTC m=+1164.230313912" watchObservedRunningTime="2026-02-19 10:01:48.612294862 +0000 UTC m=+1164.233616182"
Feb 19 10:01:48 crc kubenswrapper[4965]: I0219 10:01:48.636132 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=18.080467714 podStartE2EDuration="33.636113582s" podCreationTimestamp="2026-02-19 10:01:15 +0000 UTC" firstStartedPulling="2026-02-19 10:01:28.325445365 +0000 UTC m=+1143.946766675" lastFinishedPulling="2026-02-19 10:01:43.881091233 +0000 UTC m=+1159.502412543" observedRunningTime="2026-02-19 10:01:48.630952916 +0000 UTC m=+1164.252274226" watchObservedRunningTime="2026-02-19 10:01:48.636113582 +0000 UTC m=+1164.257434892"
Feb 19 10:01:48 crc kubenswrapper[4965]: I0219 10:01:48.795022 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hb29s"]
Feb 19 10:01:48 crc kubenswrapper[4965]: I0219 10:01:48.802990 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hb29s"]
Feb 19 10:01:48 crc kubenswrapper[4965]: E0219 10:01:48.946056 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="1520d7ba-9d74-47f8-9c7a-9731ae9ff49e"
Feb 19 10:01:49 crc kubenswrapper[4965]: I0219 10:01:49.209606 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45fd5ec9-f248-4c13-b4a5-85885283391e" path="/var/lib/kubelet/pods/45fd5ec9-f248-4c13-b4a5-85885283391e/volumes"
Feb 19 10:01:49 crc kubenswrapper[4965]: I0219 10:01:49.594654 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerStarted","Data":"6b5800cd8d3cdf0bd49b0429f539e236aa824e01e6e8bf55c3f2737a438df531"}
Feb 19 10:01:49 crc kubenswrapper[4965]: I0219 10:01:49.596717 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"305a32d6-c9f8-4494-b356-75d6c54c7467","Type":"ContainerStarted","Data":"a1a65257004b242a15769f37a6af84d074317c1c847f8be584c5160b609c9964"}
Feb 19 10:01:49 crc kubenswrapper[4965]: I0219 10:01:49.598918 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5b862187-0edd-4939-9260-d0d35653485c","Type":"ContainerStarted","Data":"817f8381984aa8f0699a01b15f61518dc8161b7621dfb2162577018cf1a81d36"}
Feb 19 10:01:49 crc kubenswrapper[4965]: I0219 10:01:49.601097 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"faab82f2-bc31-438d-b329-9a31d6ba5040","Type":"ContainerStarted","Data":"be2436a6c582fa274d55b836c9b0a0b4cfea5c831294c4f4a443de014f1a4f20"}
Feb 19 10:01:49 crc kubenswrapper[4965]: I0219 10:01:49.601642 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 19 10:01:49 crc kubenswrapper[4965]: I0219 10:01:49.605971 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg" event={"ID":"849f49ac-72be-49ce-ab6b-2eb5890a6337","Type":"ContainerStarted","Data":"cd6499409682914296aa621f31140096bef0e389983b6d1a14eba7e83eba1a5a"}
Feb 19 10:01:49 crc kubenswrapper[4965]: I0219 10:01:49.606390 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg"
Feb 19 10:01:49 crc kubenswrapper[4965]: I0219 10:01:49.609500 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"98633dba-c95c-4f35-a045-5c738d652492","Type":"ContainerStarted","Data":"8389e1b06f3371d39ba9e547f4cf2b901e8a1e6144e79230b2833e0c0e109eee"}
Feb 19 10:01:49 crc kubenswrapper[4965]: I0219 10:01:49.609546 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"98633dba-c95c-4f35-a045-5c738d652492","Type":"ContainerStarted","Data":"dd59a8d890c0b8aa4eb2e3070c0eb78f28c6a0c7ca25eed2a160ff1de71e89d4"}
Feb 19 10:01:49 crc kubenswrapper[4965]: I0219 10:01:49.614635 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"5038aafe-e39d-479c-b355-bbac1a77fa4a","Type":"ContainerStarted","Data":"fb35ebf29782ae6162ec4e0078b92d42b2d91ad6e7392c0456dde040d544f048"}
Feb 19 10:01:49 crc kubenswrapper[4965]: I0219 10:01:49.614748 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 19 10:01:49 crc kubenswrapper[4965]: I0219 10:01:49.616523 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" event={"ID":"b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a","Type":"ContainerStarted","Data":"fb820b29b92fbed47d363a26fa4b7a502a852b8a33827e6fbc39f871aa92ad9d"}
Feb 19 10:01:49 crc kubenswrapper[4965]: I0219 10:01:49.616695 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555"
Feb 19 10:01:49 crc kubenswrapper[4965]: I0219 10:01:49.618590 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"f9902193-fba0-4ea4-8de6-352459b1c13f","Type":"ContainerStarted","Data":"bdc79a8b49cea116ab8fa92c807068e66e0af6b17354041731ebc60dd492e5bf"}
Feb 19 10:01:49 crc kubenswrapper[4965]: I0219 10:01:49.619042 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 19 10:01:49 crc kubenswrapper[4965]: I0219 10:01:49.620593 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e","Type":"ContainerStarted","Data":"f3f597d11f4f10390999433ed9c7f8315c6bf69beab815cf8f18b8b28c63d287"}
Feb 19 10:01:49 crc kubenswrapper[4965]: E0219 10:01:49.622063 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="1520d7ba-9d74-47f8-9c7a-9731ae9ff49e"
Feb 19 10:01:49 crc kubenswrapper[4965]: I0219 10:01:49.622524 4965 generic.go:334] "Generic (PLEG): container finished" podID="3f408d9e-6ca2-490c-be7e-0516fa19db75" containerID="dff66afc07dec0a4fa7d0e4d5626c893acb2180ddc728ed5a7db0084370907d3" exitCode=0
Feb 19 10:01:49 crc kubenswrapper[4965]: I0219 10:01:49.622582 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jlns7" event={"ID":"3f408d9e-6ca2-490c-be7e-0516fa19db75","Type":"ContainerDied","Data":"dff66afc07dec0a4fa7d0e4d5626c893acb2180ddc728ed5a7db0084370907d3"}
Feb 19 10:01:49 crc kubenswrapper[4965]: I0219 10:01:49.625942 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" event={"ID":"3c673b0f-7739-4b94-99b9-abd66fb51937","Type":"ContainerStarted","Data":"d0309e3dff5fecf5b92f1411983290eb6b2bc443c4ff740572d9c7febaaf1fe6"}
Feb 19 10:01:49 crc kubenswrapper[4965]: I0219 10:01:49.626388 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl"
Feb 19 10:01:49 crc kubenswrapper[4965]: I0219 10:01:49.639819 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl"
Feb 19 10:01:49 crc kubenswrapper[4965]: I0219 10:01:49.970624 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555"
Feb 19 10:01:50 crc kubenswrapper[4965]: I0219 10:01:50.016342 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-9vkbl" podStartSLOduration=16.858944146 podStartE2EDuration="20.016317817s" podCreationTimestamp="2026-02-19 10:01:30 +0000 UTC" firstStartedPulling="2026-02-19 10:01:44.624359584 +0000 UTC m=+1160.245680894" lastFinishedPulling="2026-02-19 10:01:47.781733255 +0000 UTC m=+1163.403054565" observedRunningTime="2026-02-19 10:01:50.005984165 +0000 UTC m=+1165.627305505" watchObservedRunningTime="2026-02-19 10:01:50.016317817 +0000 UTC m=+1165.637639127"
Feb 19 10:01:50 crc kubenswrapper[4965]: I0219 10:01:50.095080 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=21.095054138 podStartE2EDuration="21.095054138s" podCreationTimestamp="2026-02-19 10:01:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:01:50.094154146 +0000 UTC m=+1165.715475456" watchObservedRunningTime="2026-02-19 10:01:50.095054138 +0000 UTC m=+1165.716375448"
Feb 19 10:01:50 crc kubenswrapper[4965]: I0219 10:01:50.138117 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-h6555" podStartSLOduration=16.980841285 podStartE2EDuration="20.138100044s" podCreationTimestamp="2026-02-19 10:01:30 +0000 UTC" firstStartedPulling="2026-02-19 10:01:44.624577249 +0000 UTC m=+1160.245898559" lastFinishedPulling="2026-02-19 10:01:47.781836008 +0000 UTC m=+1163.403157318" observedRunningTime="2026-02-19 10:01:50.134641619 +0000 UTC m=+1165.755962939" watchObservedRunningTime="2026-02-19 10:01:50.138100044 +0000 UTC m=+1165.759421354"
Feb 19 10:01:50 crc kubenswrapper[4965]: I0219 10:01:50.152035 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=20.152011901 podStartE2EDuration="20.152011901s" podCreationTimestamp="2026-02-19 10:01:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:01:50.150995256 +0000 UTC m=+1165.772316566" watchObservedRunningTime="2026-02-19 10:01:50.152011901 +0000 UTC m=+1165.773333211"
Feb 19 10:01:50 crc kubenswrapper[4965]: I0219 10:01:50.205898 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=15.773028614 podStartE2EDuration="30.205872189s" podCreationTimestamp="2026-02-19 10:01:20 +0000 UTC" firstStartedPulling="2026-02-19 10:01:30.387595504 +0000 UTC m=+1146.008916814" lastFinishedPulling="2026-02-19 10:01:44.820439089 +0000 UTC m=+1160.441760389" observedRunningTime="2026-02-19 10:01:50.198342976 +0000 UTC m=+1165.819664306" watchObservedRunningTime="2026-02-19 10:01:50.205872189 +0000 UTC m=+1165.827193499"
Feb 19 10:01:50 crc kubenswrapper[4965]: I0219 10:01:50.236633 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=21.236609975 podStartE2EDuration="21.236609975s" podCreationTimestamp="2026-02-19 10:01:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:01:50.231923822 +0000 UTC m=+1165.853245152" watchObservedRunningTime="2026-02-19 10:01:50.236609975 +0000 UTC m=+1165.857931295"
Feb 19 10:01:50 crc kubenswrapper[4965]: I0219 10:01:50.261905 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg" podStartSLOduration=18.072813045 podStartE2EDuration="21.261883919s" podCreationTimestamp="2026-02-19 10:01:29 +0000 UTC" firstStartedPulling="2026-02-19 10:01:41.5424899 +0000 UTC m=+1157.163811210" lastFinishedPulling="2026-02-19 10:01:44.731560774 +0000 UTC m=+1160.352882084" observedRunningTime="2026-02-19 10:01:50.254653533 +0000 UTC m=+1165.875974873" watchObservedRunningTime="2026-02-19 10:01:50.261883919 +0000 UTC m=+1165.883205229"
Feb 19 10:01:50 crc kubenswrapper[4965]: I0219 10:01:50.642047 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jlns7" event={"ID":"3f408d9e-6ca2-490c-be7e-0516fa19db75","Type":"ContainerStarted","Data":"b73d50050f6d2716b6aa4614e28257bf6cb3aa04d7586bbd4b55316e5057f7df"}
Feb 19 10:01:50 crc kubenswrapper[4965]: E0219 10:01:50.645881 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="1520d7ba-9d74-47f8-9c7a-9731ae9ff49e"
Feb 19 10:01:51 crc kubenswrapper[4965]: I0219 10:01:51.665984 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jlns7" event={"ID":"3f408d9e-6ca2-490c-be7e-0516fa19db75","Type":"ContainerStarted","Data":"b17789b6a41595c4db92617889d84600b608d8b51b592dbc6a6d48b0d7965cb1"}
Feb 19 10:01:51 crc kubenswrapper[4965]: I0219 10:01:51.667045 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jlns7"
Feb 19 10:01:51 crc kubenswrapper[4965]: I0219 10:01:51.692880 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-jlns7" podStartSLOduration=14.643409382 podStartE2EDuration="29.692859252s" podCreationTimestamp="2026-02-19 10:01:22 +0000 UTC" firstStartedPulling="2026-02-19 10:01:29.27172628 +0000 UTC m=+1144.893047590" lastFinishedPulling="2026-02-19 10:01:44.32117615 +0000 UTC m=+1159.942497460" observedRunningTime="2026-02-19 10:01:51.686693833 +0000 UTC m=+1167.308015173" watchObservedRunningTime="2026-02-19 10:01:51.692859252 +0000 UTC m=+1167.314180562"
Feb 19 10:01:52 crc kubenswrapper[4965]: I0219 10:01:52.192688 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Feb 19 10:01:52 crc kubenswrapper[4965]: I0219 10:01:52.192807 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 19 10:01:52 crc kubenswrapper[4965]: I0219 10:01:52.230609 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Feb 19 10:01:52 crc kubenswrapper[4965]: I0219 10:01:52.529461 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jlns7"
Feb 19 10:01:52 crc kubenswrapper[4965]: I0219 10:01:52.683392 4965 generic.go:334] "Generic (PLEG): container finished" podID="215df1f4-6c30-4144-b141-5a867e8d2728" containerID="79a30afdb7745a41590c2896066c61a36de701e906943b46e9057a390881f253" exitCode=0
Feb 19 10:01:52 crc kubenswrapper[4965]: I0219 10:01:52.683449 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"215df1f4-6c30-4144-b141-5a867e8d2728","Type":"ContainerDied","Data":"79a30afdb7745a41590c2896066c61a36de701e906943b46e9057a390881f253"}
Feb 19 10:01:53 crc kubenswrapper[4965]: I0219 10:01:53.694784 4965 generic.go:334] "Generic (PLEG): container finished" podID="45105c9e-db96-41c5-ba42-d56027ca318c" containerID="68b64c597facc59d7d7ca9436c6435511991e443832c3646adbed43348d17a35" exitCode=0
Feb 19 10:01:53 crc kubenswrapper[4965]: I0219 10:01:53.694875 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"45105c9e-db96-41c5-ba42-d56027ca318c","Type":"ContainerDied","Data":"68b64c597facc59d7d7ca9436c6435511991e443832c3646adbed43348d17a35"}
Feb 19 10:01:53 crc kubenswrapper[4965]: I0219 10:01:53.699507 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"215df1f4-6c30-4144-b141-5a867e8d2728","Type":"ContainerStarted","Data":"6203628deee8089496b5370b7f480152674992439f0105f5ebf4f6a8c73b5b6d"}
Feb 19 10:01:53 crc kubenswrapper[4965]: I0219 10:01:53.701503 4965 generic.go:334] "Generic (PLEG): container finished" podID="5b862187-0edd-4939-9260-d0d35653485c" containerID="817f8381984aa8f0699a01b15f61518dc8161b7621dfb2162577018cf1a81d36" exitCode=0
Feb 19 10:01:53 crc kubenswrapper[4965]: I0219 10:01:53.701535 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5b862187-0edd-4939-9260-d0d35653485c","Type":"ContainerDied","Data":"817f8381984aa8f0699a01b15f61518dc8161b7621dfb2162577018cf1a81d36"}
Feb 19 10:01:53 crc kubenswrapper[4965]: I0219 10:01:53.783422 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=24.464111034 podStartE2EDuration="40.783397849s" podCreationTimestamp="2026-02-19 10:01:13 +0000 UTC" firstStartedPulling="2026-02-19 10:01:28.293183329 +0000 UTC m=+1143.914504639" lastFinishedPulling="2026-02-19 10:01:44.612470144 +0000 UTC m=+1160.233791454" observedRunningTime="2026-02-19 10:01:53.774584196 +0000 UTC m=+1169.395905506" watchObservedRunningTime="2026-02-19 10:01:53.783397849 +0000 UTC m=+1169.404719179"
Feb 19 10:01:54 crc kubenswrapper[4965]: I0219 10:01:54.713855 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5b862187-0edd-4939-9260-d0d35653485c","Type":"ContainerStarted","Data":"0745c47f49bf388c69d25866c1bb44dfc8e2a2e5578d5d125817997207744cfe"}
Feb 19 10:01:54 crc kubenswrapper[4965]: I0219 10:01:54.750621 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=25.542678374 podStartE2EDuration="40.750597853s" podCreationTimestamp="2026-02-19 10:01:14 +0000 UTC" firstStartedPulling="2026-02-19 10:01:29.127302513 +0000 UTC m=+1144.748623833" lastFinishedPulling="2026-02-19 10:01:44.335221992 +0000 UTC m=+1159.956543312" observedRunningTime="2026-02-19 10:01:54.738423867 +0000 UTC m=+1170.359745197" watchObservedRunningTime="2026-02-19 10:01:54.750597853 +0000 UTC m=+1170.371919173"
Feb 19 10:01:54 crc kubenswrapper[4965]: I0219 10:01:54.921052 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 19 10:01:54 crc kubenswrapper[4965]: I0219 10:01:54.921393 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 19 10:01:55 crc kubenswrapper[4965]: I0219 10:01:55.725340 4965 generic.go:334] "Generic (PLEG): container finished" podID="e7a4c9f4-b898-43b4-812d-ab4f17c2124d" containerID="fcb4fa855945951961264a47f8ad49bbe1837e56000c48259f5d69179ff8e515" exitCode=0
Feb 19 10:01:55 crc kubenswrapper[4965]: I0219 10:01:55.725383 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e7a4c9f4-b898-43b4-812d-ab4f17c2124d","Type":"ContainerDied","Data":"fcb4fa855945951961264a47f8ad49bbe1837e56000c48259f5d69179ff8e515"}
Feb 19 10:01:56 crc kubenswrapper[4965]: I0219 10:01:56.342681 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:56 crc kubenswrapper[4965]: I0219 10:01:56.343054 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 19 10:01:56 crc kubenswrapper[4965]: I0219 10:01:56.383549 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 19 10:01:56 crc kubenswrapper[4965]: I0219 10:01:56.734681 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"45105c9e-db96-41c5-ba42-d56027ca318c","Type":"ContainerStarted","Data":"932321e0cd00cd283f66a427d3424f502887b7daba84a66db501656baf4acb4d"}
Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.448427 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.713070 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-n99vb"]
Feb 19 10:01:57 crc kubenswrapper[4965]: E0219 10:01:57.713523 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45fd5ec9-f248-4c13-b4a5-85885283391e" containerName="init"
Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.713546 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="45fd5ec9-f248-4c13-b4a5-85885283391e" containerName="init"
Feb 19 10:01:57 crc kubenswrapper[4965]: E0219 10:01:57.713568 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45fd5ec9-f248-4c13-b4a5-85885283391e" containerName="dnsmasq-dns"
Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.713576 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="45fd5ec9-f248-4c13-b4a5-85885283391e" containerName="dnsmasq-dns"
Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.713763 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="45fd5ec9-f248-4c13-b4a5-85885283391e" containerName="dnsmasq-dns"
Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.714991 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-n99vb"
Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.716940 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.729245 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-n99vb"]
Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.765328 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6263bd60-b2d0-44ff-ae54-874728576f1d","Type":"ContainerStarted","Data":"5f5b70678e77927b68b8ea51dd42cb45ad507a78c7344cc371c42214804244fd"}
Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.766122 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.774639 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-wwgl6"]
Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.776238 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-wwgl6" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.779142 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.788320 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wwgl6"] Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.803619 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.230230934 podStartE2EDuration="39.803595289s" podCreationTimestamp="2026-02-19 10:01:18 +0000 UTC" firstStartedPulling="2026-02-19 10:01:29.166281251 +0000 UTC m=+1144.787602561" lastFinishedPulling="2026-02-19 10:01:56.739645606 +0000 UTC m=+1172.360966916" observedRunningTime="2026-02-19 10:01:57.797025529 +0000 UTC m=+1173.418346839" watchObservedRunningTime="2026-02-19 10:01:57.803595289 +0000 UTC m=+1173.424916609" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.847725 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/154fb9e1-1e52-4338-964c-8210b8bbbc57-combined-ca-bundle\") pod \"ovn-controller-metrics-wwgl6\" (UID: \"154fb9e1-1e52-4338-964c-8210b8bbbc57\") " pod="openstack/ovn-controller-metrics-wwgl6" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.847874 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6279de0c-ae0a-49f5-983b-aed9821a1c6f-config\") pod \"dnsmasq-dns-5bf47b49b7-n99vb\" (UID: \"6279de0c-ae0a-49f5-983b-aed9821a1c6f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-n99vb" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.847925 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/154fb9e1-1e52-4338-964c-8210b8bbbc57-ovs-rundir\") pod \"ovn-controller-metrics-wwgl6\" (UID: \"154fb9e1-1e52-4338-964c-8210b8bbbc57\") " pod="openstack/ovn-controller-metrics-wwgl6" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.847968 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/154fb9e1-1e52-4338-964c-8210b8bbbc57-ovn-rundir\") pod \"ovn-controller-metrics-wwgl6\" (UID: \"154fb9e1-1e52-4338-964c-8210b8bbbc57\") " pod="openstack/ovn-controller-metrics-wwgl6" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.848002 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/154fb9e1-1e52-4338-964c-8210b8bbbc57-config\") pod \"ovn-controller-metrics-wwgl6\" (UID: \"154fb9e1-1e52-4338-964c-8210b8bbbc57\") " pod="openstack/ovn-controller-metrics-wwgl6" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.848027 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6279de0c-ae0a-49f5-983b-aed9821a1c6f-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-n99vb\" (UID: \"6279de0c-ae0a-49f5-983b-aed9821a1c6f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-n99vb" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.848056 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbfns\" (UniqueName: \"kubernetes.io/projected/6279de0c-ae0a-49f5-983b-aed9821a1c6f-kube-api-access-mbfns\") pod \"dnsmasq-dns-5bf47b49b7-n99vb\" (UID: \"6279de0c-ae0a-49f5-983b-aed9821a1c6f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-n99vb" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.848083 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-hphrb\" (UniqueName: \"kubernetes.io/projected/154fb9e1-1e52-4338-964c-8210b8bbbc57-kube-api-access-hphrb\") pod \"ovn-controller-metrics-wwgl6\" (UID: \"154fb9e1-1e52-4338-964c-8210b8bbbc57\") " pod="openstack/ovn-controller-metrics-wwgl6" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.848116 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/154fb9e1-1e52-4338-964c-8210b8bbbc57-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wwgl6\" (UID: \"154fb9e1-1e52-4338-964c-8210b8bbbc57\") " pod="openstack/ovn-controller-metrics-wwgl6" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.848959 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6279de0c-ae0a-49f5-983b-aed9821a1c6f-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-n99vb\" (UID: \"6279de0c-ae0a-49f5-983b-aed9821a1c6f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-n99vb" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.951061 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6279de0c-ae0a-49f5-983b-aed9821a1c6f-config\") pod \"dnsmasq-dns-5bf47b49b7-n99vb\" (UID: \"6279de0c-ae0a-49f5-983b-aed9821a1c6f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-n99vb" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.951129 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/154fb9e1-1e52-4338-964c-8210b8bbbc57-ovs-rundir\") pod \"ovn-controller-metrics-wwgl6\" (UID: \"154fb9e1-1e52-4338-964c-8210b8bbbc57\") " pod="openstack/ovn-controller-metrics-wwgl6" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.951175 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/154fb9e1-1e52-4338-964c-8210b8bbbc57-ovn-rundir\") pod \"ovn-controller-metrics-wwgl6\" (UID: \"154fb9e1-1e52-4338-964c-8210b8bbbc57\") " pod="openstack/ovn-controller-metrics-wwgl6" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.951254 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/154fb9e1-1e52-4338-964c-8210b8bbbc57-config\") pod \"ovn-controller-metrics-wwgl6\" (UID: \"154fb9e1-1e52-4338-964c-8210b8bbbc57\") " pod="openstack/ovn-controller-metrics-wwgl6" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.951280 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6279de0c-ae0a-49f5-983b-aed9821a1c6f-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-n99vb\" (UID: \"6279de0c-ae0a-49f5-983b-aed9821a1c6f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-n99vb" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.951302 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbfns\" (UniqueName: \"kubernetes.io/projected/6279de0c-ae0a-49f5-983b-aed9821a1c6f-kube-api-access-mbfns\") pod \"dnsmasq-dns-5bf47b49b7-n99vb\" (UID: \"6279de0c-ae0a-49f5-983b-aed9821a1c6f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-n99vb" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.951326 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hphrb\" (UniqueName: \"kubernetes.io/projected/154fb9e1-1e52-4338-964c-8210b8bbbc57-kube-api-access-hphrb\") pod \"ovn-controller-metrics-wwgl6\" (UID: \"154fb9e1-1e52-4338-964c-8210b8bbbc57\") " pod="openstack/ovn-controller-metrics-wwgl6" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.951352 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/154fb9e1-1e52-4338-964c-8210b8bbbc57-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wwgl6\" (UID: \"154fb9e1-1e52-4338-964c-8210b8bbbc57\") " pod="openstack/ovn-controller-metrics-wwgl6" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.951380 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6279de0c-ae0a-49f5-983b-aed9821a1c6f-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-n99vb\" (UID: \"6279de0c-ae0a-49f5-983b-aed9821a1c6f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-n99vb" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.951406 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/154fb9e1-1e52-4338-964c-8210b8bbbc57-combined-ca-bundle\") pod \"ovn-controller-metrics-wwgl6\" (UID: \"154fb9e1-1e52-4338-964c-8210b8bbbc57\") " pod="openstack/ovn-controller-metrics-wwgl6" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.951514 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/154fb9e1-1e52-4338-964c-8210b8bbbc57-ovs-rundir\") pod \"ovn-controller-metrics-wwgl6\" (UID: \"154fb9e1-1e52-4338-964c-8210b8bbbc57\") " pod="openstack/ovn-controller-metrics-wwgl6" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.951581 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/154fb9e1-1e52-4338-964c-8210b8bbbc57-ovn-rundir\") pod \"ovn-controller-metrics-wwgl6\" (UID: \"154fb9e1-1e52-4338-964c-8210b8bbbc57\") " pod="openstack/ovn-controller-metrics-wwgl6" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.952509 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6279de0c-ae0a-49f5-983b-aed9821a1c6f-dns-svc\") pod 
\"dnsmasq-dns-5bf47b49b7-n99vb\" (UID: \"6279de0c-ae0a-49f5-983b-aed9821a1c6f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-n99vb" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.952574 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6279de0c-ae0a-49f5-983b-aed9821a1c6f-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-n99vb\" (UID: \"6279de0c-ae0a-49f5-983b-aed9821a1c6f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-n99vb" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.952682 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/154fb9e1-1e52-4338-964c-8210b8bbbc57-config\") pod \"ovn-controller-metrics-wwgl6\" (UID: \"154fb9e1-1e52-4338-964c-8210b8bbbc57\") " pod="openstack/ovn-controller-metrics-wwgl6" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.952973 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6279de0c-ae0a-49f5-983b-aed9821a1c6f-config\") pod \"dnsmasq-dns-5bf47b49b7-n99vb\" (UID: \"6279de0c-ae0a-49f5-983b-aed9821a1c6f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-n99vb" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.957844 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/154fb9e1-1e52-4338-964c-8210b8bbbc57-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wwgl6\" (UID: \"154fb9e1-1e52-4338-964c-8210b8bbbc57\") " pod="openstack/ovn-controller-metrics-wwgl6" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.957902 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/154fb9e1-1e52-4338-964c-8210b8bbbc57-combined-ca-bundle\") pod \"ovn-controller-metrics-wwgl6\" (UID: \"154fb9e1-1e52-4338-964c-8210b8bbbc57\") " 
pod="openstack/ovn-controller-metrics-wwgl6" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.977007 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hphrb\" (UniqueName: \"kubernetes.io/projected/154fb9e1-1e52-4338-964c-8210b8bbbc57-kube-api-access-hphrb\") pod \"ovn-controller-metrics-wwgl6\" (UID: \"154fb9e1-1e52-4338-964c-8210b8bbbc57\") " pod="openstack/ovn-controller-metrics-wwgl6" Feb 19 10:01:57 crc kubenswrapper[4965]: I0219 10:01:57.984101 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbfns\" (UniqueName: \"kubernetes.io/projected/6279de0c-ae0a-49f5-983b-aed9821a1c6f-kube-api-access-mbfns\") pod \"dnsmasq-dns-5bf47b49b7-n99vb\" (UID: \"6279de0c-ae0a-49f5-983b-aed9821a1c6f\") " pod="openstack/dnsmasq-dns-5bf47b49b7-n99vb" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.059530 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-n99vb" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.096427 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-wwgl6" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.161047 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-n99vb"] Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.200916 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-v5nw7"] Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.202588 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-v5nw7" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.210582 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-v5nw7"] Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.211141 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.261366 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-dns-svc\") pod \"dnsmasq-dns-8554648995-v5nw7\" (UID: \"89777380-e0ef-43b5-b247-4033b38bfaba\") " pod="openstack/dnsmasq-dns-8554648995-v5nw7" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.261425 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-v5nw7\" (UID: \"89777380-e0ef-43b5-b247-4033b38bfaba\") " pod="openstack/dnsmasq-dns-8554648995-v5nw7" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.261572 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsg8b\" (UniqueName: \"kubernetes.io/projected/89777380-e0ef-43b5-b247-4033b38bfaba-kube-api-access-tsg8b\") pod \"dnsmasq-dns-8554648995-v5nw7\" (UID: \"89777380-e0ef-43b5-b247-4033b38bfaba\") " pod="openstack/dnsmasq-dns-8554648995-v5nw7" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.261640 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-v5nw7\" (UID: \"89777380-e0ef-43b5-b247-4033b38bfaba\") " 
pod="openstack/dnsmasq-dns-8554648995-v5nw7" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.261713 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-config\") pod \"dnsmasq-dns-8554648995-v5nw7\" (UID: \"89777380-e0ef-43b5-b247-4033b38bfaba\") " pod="openstack/dnsmasq-dns-8554648995-v5nw7" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.362780 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsg8b\" (UniqueName: \"kubernetes.io/projected/89777380-e0ef-43b5-b247-4033b38bfaba-kube-api-access-tsg8b\") pod \"dnsmasq-dns-8554648995-v5nw7\" (UID: \"89777380-e0ef-43b5-b247-4033b38bfaba\") " pod="openstack/dnsmasq-dns-8554648995-v5nw7" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.363119 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-v5nw7\" (UID: \"89777380-e0ef-43b5-b247-4033b38bfaba\") " pod="openstack/dnsmasq-dns-8554648995-v5nw7" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.363173 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-config\") pod \"dnsmasq-dns-8554648995-v5nw7\" (UID: \"89777380-e0ef-43b5-b247-4033b38bfaba\") " pod="openstack/dnsmasq-dns-8554648995-v5nw7" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.363254 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-dns-svc\") pod \"dnsmasq-dns-8554648995-v5nw7\" (UID: \"89777380-e0ef-43b5-b247-4033b38bfaba\") " pod="openstack/dnsmasq-dns-8554648995-v5nw7" Feb 19 10:01:58 crc 
kubenswrapper[4965]: I0219 10:01:58.363299 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-v5nw7\" (UID: \"89777380-e0ef-43b5-b247-4033b38bfaba\") " pod="openstack/dnsmasq-dns-8554648995-v5nw7" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.364223 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-v5nw7\" (UID: \"89777380-e0ef-43b5-b247-4033b38bfaba\") " pod="openstack/dnsmasq-dns-8554648995-v5nw7" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.364743 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-v5nw7\" (UID: \"89777380-e0ef-43b5-b247-4033b38bfaba\") " pod="openstack/dnsmasq-dns-8554648995-v5nw7" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.366855 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-config\") pod \"dnsmasq-dns-8554648995-v5nw7\" (UID: \"89777380-e0ef-43b5-b247-4033b38bfaba\") " pod="openstack/dnsmasq-dns-8554648995-v5nw7" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.367095 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-dns-svc\") pod \"dnsmasq-dns-8554648995-v5nw7\" (UID: \"89777380-e0ef-43b5-b247-4033b38bfaba\") " pod="openstack/dnsmasq-dns-8554648995-v5nw7" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.412951 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tsg8b\" (UniqueName: \"kubernetes.io/projected/89777380-e0ef-43b5-b247-4033b38bfaba-kube-api-access-tsg8b\") pod \"dnsmasq-dns-8554648995-v5nw7\" (UID: \"89777380-e0ef-43b5-b247-4033b38bfaba\") " pod="openstack/dnsmasq-dns-8554648995-v5nw7" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.419801 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-v5nw7"] Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.421087 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-v5nw7" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.447698 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-7nzqs"] Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.449160 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.496077 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-7nzqs"] Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.568141 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-7nzqs\" (UID: \"59e04a77-6c47-4906-86c7-72e8a36e120c\") " pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.568254 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-7nzqs\" (UID: \"59e04a77-6c47-4906-86c7-72e8a36e120c\") " pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.568294 4965 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb4x5\" (UniqueName: \"kubernetes.io/projected/59e04a77-6c47-4906-86c7-72e8a36e120c-kube-api-access-wb4x5\") pod \"dnsmasq-dns-b8fbc5445-7nzqs\" (UID: \"59e04a77-6c47-4906-86c7-72e8a36e120c\") " pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.568374 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-7nzqs\" (UID: \"59e04a77-6c47-4906-86c7-72e8a36e120c\") " pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.568429 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-config\") pod \"dnsmasq-dns-b8fbc5445-7nzqs\" (UID: \"59e04a77-6c47-4906-86c7-72e8a36e120c\") " pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.671760 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-config\") pod \"dnsmasq-dns-b8fbc5445-7nzqs\" (UID: \"59e04a77-6c47-4906-86c7-72e8a36e120c\") " pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.671851 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-7nzqs\" (UID: \"59e04a77-6c47-4906-86c7-72e8a36e120c\") " pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.671915 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-7nzqs\" (UID: \"59e04a77-6c47-4906-86c7-72e8a36e120c\") " pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.671954 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb4x5\" (UniqueName: \"kubernetes.io/projected/59e04a77-6c47-4906-86c7-72e8a36e120c-kube-api-access-wb4x5\") pod \"dnsmasq-dns-b8fbc5445-7nzqs\" (UID: \"59e04a77-6c47-4906-86c7-72e8a36e120c\") " pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.672039 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-7nzqs\" (UID: \"59e04a77-6c47-4906-86c7-72e8a36e120c\") " pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.673029 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-7nzqs\" (UID: \"59e04a77-6c47-4906-86c7-72e8a36e120c\") " pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.673258 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-7nzqs\" (UID: \"59e04a77-6c47-4906-86c7-72e8a36e120c\") " pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.673658 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-7nzqs\" (UID: \"59e04a77-6c47-4906-86c7-72e8a36e120c\") " pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.674285 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-config\") pod \"dnsmasq-dns-b8fbc5445-7nzqs\" (UID: \"59e04a77-6c47-4906-86c7-72e8a36e120c\") " pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.698530 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb4x5\" (UniqueName: \"kubernetes.io/projected/59e04a77-6c47-4906-86c7-72e8a36e120c-kube-api-access-wb4x5\") pod \"dnsmasq-dns-b8fbc5445-7nzqs\" (UID: \"59e04a77-6c47-4906-86c7-72e8a36e120c\") " pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.760228 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wwgl6"] Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.797231 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" Feb 19 10:01:58 crc kubenswrapper[4965]: W0219 10:01:58.797765 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6279de0c_ae0a_49f5_983b_aed9821a1c6f.slice/crio-01276af75e3bac56a091aa0a1ab118a1071a1770af55b6a2072ccec67ffee323 WatchSource:0}: Error finding container 01276af75e3bac56a091aa0a1ab118a1071a1770af55b6a2072ccec67ffee323: Status 404 returned error can't find the container with id 01276af75e3bac56a091aa0a1ab118a1071a1770af55b6a2072ccec67ffee323 Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.797776 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-n99vb"] Feb 19 10:01:58 crc kubenswrapper[4965]: I0219 10:01:58.805977 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wwgl6" event={"ID":"154fb9e1-1e52-4338-964c-8210b8bbbc57","Type":"ContainerStarted","Data":"4ad9cd3caf5254939155f31075cc881c5121a0f338e6e0d392ed7fa180ccb24a"} Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.018685 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.267970 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-v5nw7"] Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.276224 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.438243 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-7nzqs"] Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.575169 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.582831 4965 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.587340 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.587461 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.587628 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-pwg5w" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.591819 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.638158 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.639248 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-lock\") pod \"swift-storage-0\" (UID: \"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") " pod="openstack/swift-storage-0" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.639363 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") " pod="openstack/swift-storage-0" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.639410 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-etc-swift\") pod \"swift-storage-0\" (UID: \"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") " 
pod="openstack/swift-storage-0" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.639437 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ba432415-8382-4bbb-a3bc-3abd6cc45e1a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba432415-8382-4bbb-a3bc-3abd6cc45e1a\") pod \"swift-storage-0\" (UID: \"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") " pod="openstack/swift-storage-0" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.639480 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-cache\") pod \"swift-storage-0\" (UID: \"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") " pod="openstack/swift-storage-0" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.639506 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kt75\" (UniqueName: \"kubernetes.io/projected/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-kube-api-access-2kt75\") pod \"swift-storage-0\" (UID: \"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") " pod="openstack/swift-storage-0" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.742483 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") " pod="openstack/swift-storage-0" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.742564 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-etc-swift\") pod \"swift-storage-0\" (UID: \"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") " pod="openstack/swift-storage-0" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.742722 
4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ba432415-8382-4bbb-a3bc-3abd6cc45e1a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba432415-8382-4bbb-a3bc-3abd6cc45e1a\") pod \"swift-storage-0\" (UID: \"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") " pod="openstack/swift-storage-0" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.742770 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-cache\") pod \"swift-storage-0\" (UID: \"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") " pod="openstack/swift-storage-0" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.742821 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kt75\" (UniqueName: \"kubernetes.io/projected/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-kube-api-access-2kt75\") pod \"swift-storage-0\" (UID: \"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") " pod="openstack/swift-storage-0" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.742901 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-lock\") pod \"swift-storage-0\" (UID: \"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") " pod="openstack/swift-storage-0" Feb 19 10:01:59 crc kubenswrapper[4965]: E0219 10:01:59.743449 4965 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 10:01:59 crc kubenswrapper[4965]: E0219 10:01:59.743487 4965 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 10:01:59 crc kubenswrapper[4965]: E0219 10:01:59.743547 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-etc-swift 
podName:2c3ae050-b164-4fbc-9e5b-392eb0a4fb53 nodeName:}" failed. No retries permitted until 2026-02-19 10:02:00.243525939 +0000 UTC m=+1175.864847249 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-etc-swift") pod "swift-storage-0" (UID: "2c3ae050-b164-4fbc-9e5b-392eb0a4fb53") : configmap "swift-ring-files" not found Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.743593 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-cache\") pod \"swift-storage-0\" (UID: \"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") " pod="openstack/swift-storage-0" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.743615 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-lock\") pod \"swift-storage-0\" (UID: \"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") " pod="openstack/swift-storage-0" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.748782 4965 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.749092 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ba432415-8382-4bbb-a3bc-3abd6cc45e1a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba432415-8382-4bbb-a3bc-3abd6cc45e1a\") pod \"swift-storage-0\" (UID: \"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6c873a75eeb7c534ac156755ad63667ad496021176c0e67103baf2b8e047e109/globalmount\"" pod="openstack/swift-storage-0" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.750700 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") " pod="openstack/swift-storage-0" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.767168 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kt75\" (UniqueName: \"kubernetes.io/projected/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-kube-api-access-2kt75\") pod \"swift-storage-0\" (UID: \"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") " pod="openstack/swift-storage-0" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.787710 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ba432415-8382-4bbb-a3bc-3abd6cc45e1a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba432415-8382-4bbb-a3bc-3abd6cc45e1a\") pod \"swift-storage-0\" (UID: \"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") " pod="openstack/swift-storage-0" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.821657 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwlb6" 
event={"ID":"0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a","Type":"ContainerStarted","Data":"fb55521888f2dc10580ef2c13aaafdb10096c2632ad8052efd1c2398c16ca672"} Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.821920 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-mwlb6" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.825009 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"45105c9e-db96-41c5-ba42-d56027ca318c","Type":"ContainerStarted","Data":"02993a91a1408ce11ec6d584ec04a0e8d5b85a028c36deb5e503b7c53c5a96be"} Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.825231 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.827814 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.829756 4965 generic.go:334] "Generic (PLEG): container finished" podID="6279de0c-ae0a-49f5-983b-aed9821a1c6f" containerID="b2e9188738538db9a8df45c5b8ba9be8ce2b388e4150833015d02c983e937550" exitCode=0 Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.829858 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-n99vb" event={"ID":"6279de0c-ae0a-49f5-983b-aed9821a1c6f","Type":"ContainerDied","Data":"b2e9188738538db9a8df45c5b8ba9be8ce2b388e4150833015d02c983e937550"} Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.829902 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-n99vb" event={"ID":"6279de0c-ae0a-49f5-983b-aed9821a1c6f","Type":"ContainerStarted","Data":"01276af75e3bac56a091aa0a1ab118a1071a1770af55b6a2072ccec67ffee323"} Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.831717 4965 generic.go:334] "Generic (PLEG): container 
finished" podID="59e04a77-6c47-4906-86c7-72e8a36e120c" containerID="2fc31f0ae4a307f5ef9cd950c3fd4308423d9cdf3157dfd12804e1e247595f7d" exitCode=0 Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.832885 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" event={"ID":"59e04a77-6c47-4906-86c7-72e8a36e120c","Type":"ContainerDied","Data":"2fc31f0ae4a307f5ef9cd950c3fd4308423d9cdf3157dfd12804e1e247595f7d"} Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.832919 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" event={"ID":"59e04a77-6c47-4906-86c7-72e8a36e120c","Type":"ContainerStarted","Data":"c6b7c57ae0381e69b560b0869d128239d6aad0a50a623cdaf0dbd386bb8ab27c"} Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.835585 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wwgl6" event={"ID":"154fb9e1-1e52-4338-964c-8210b8bbbc57","Type":"ContainerStarted","Data":"5d106a4db41ee81403566f9957008fc3edfdf0a3e89c1c75f0c6df75ed8ba87e"} Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.842924 4965 generic.go:334] "Generic (PLEG): container finished" podID="89777380-e0ef-43b5-b247-4033b38bfaba" containerID="9539470165ac2fef05418c883c5450cafcc79bf8b13c7686843087710cb85574" exitCode=0 Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.843457 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-v5nw7" event={"ID":"89777380-e0ef-43b5-b247-4033b38bfaba","Type":"ContainerDied","Data":"9539470165ac2fef05418c883c5450cafcc79bf8b13c7686843087710cb85574"} Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.843553 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-v5nw7" event={"ID":"89777380-e0ef-43b5-b247-4033b38bfaba","Type":"ContainerStarted","Data":"4c328ae00bc7d46b53a733f83b9620665d8311a92e03c1c2cb97898ebcf06962"} Feb 19 10:01:59 crc 
kubenswrapper[4965]: I0219 10:01:59.868980 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=14.111384524 podStartE2EDuration="41.868958074s" podCreationTimestamp="2026-02-19 10:01:18 +0000 UTC" firstStartedPulling="2026-02-19 10:01:27.90480665 +0000 UTC m=+1143.526127960" lastFinishedPulling="2026-02-19 10:01:55.6623802 +0000 UTC m=+1171.283701510" observedRunningTime="2026-02-19 10:01:59.868067442 +0000 UTC m=+1175.489388762" watchObservedRunningTime="2026-02-19 10:01:59.868958074 +0000 UTC m=+1175.490279384" Feb 19 10:01:59 crc kubenswrapper[4965]: I0219 10:01:59.873065 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-mwlb6" podStartSLOduration=7.462102651 podStartE2EDuration="37.873051204s" podCreationTimestamp="2026-02-19 10:01:22 +0000 UTC" firstStartedPulling="2026-02-19 10:01:28.316298771 +0000 UTC m=+1143.937620081" lastFinishedPulling="2026-02-19 10:01:58.727247324 +0000 UTC m=+1174.348568634" observedRunningTime="2026-02-19 10:01:59.839989661 +0000 UTC m=+1175.461310981" watchObservedRunningTime="2026-02-19 10:01:59.873051204 +0000 UTC m=+1175.494372514" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.049701 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-wwgl6" podStartSLOduration=3.049683632 podStartE2EDuration="3.049683632s" podCreationTimestamp="2026-02-19 10:01:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:01:59.928164402 +0000 UTC m=+1175.549485702" watchObservedRunningTime="2026-02-19 10:02:00.049683632 +0000 UTC m=+1175.671004942" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.141844 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-kx9jd"] Feb 19 10:02:00 crc kubenswrapper[4965]: 
I0219 10:02:00.143218 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.146457 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.146700 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.146870 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.154787 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-kx9jd"] Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.259536 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f2a6db35-796d-485d-9b96-5c03b7d7725b-dispersionconf\") pod \"swift-ring-rebalance-kx9jd\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.259599 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a6db35-796d-485d-9b96-5c03b7d7725b-combined-ca-bundle\") pod \"swift-ring-rebalance-kx9jd\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.259693 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2df6g\" (UniqueName: \"kubernetes.io/projected/f2a6db35-796d-485d-9b96-5c03b7d7725b-kube-api-access-2df6g\") pod \"swift-ring-rebalance-kx9jd\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " 
pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.259732 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-etc-swift\") pod \"swift-storage-0\" (UID: \"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") " pod="openstack/swift-storage-0" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.259788 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f2a6db35-796d-485d-9b96-5c03b7d7725b-swiftconf\") pod \"swift-ring-rebalance-kx9jd\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.259817 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f2a6db35-796d-485d-9b96-5c03b7d7725b-scripts\") pod \"swift-ring-rebalance-kx9jd\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.259937 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f2a6db35-796d-485d-9b96-5c03b7d7725b-etc-swift\") pod \"swift-ring-rebalance-kx9jd\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.259993 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f2a6db35-796d-485d-9b96-5c03b7d7725b-ring-data-devices\") pod \"swift-ring-rebalance-kx9jd\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:00 
crc kubenswrapper[4965]: E0219 10:02:00.260177 4965 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 10:02:00 crc kubenswrapper[4965]: E0219 10:02:00.260232 4965 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 10:02:00 crc kubenswrapper[4965]: E0219 10:02:00.260369 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-etc-swift podName:2c3ae050-b164-4fbc-9e5b-392eb0a4fb53 nodeName:}" failed. No retries permitted until 2026-02-19 10:02:01.260346007 +0000 UTC m=+1176.881667387 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-etc-swift") pod "swift-storage-0" (UID: "2c3ae050-b164-4fbc-9e5b-392eb0a4fb53") : configmap "swift-ring-files" not found Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.361510 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f2a6db35-796d-485d-9b96-5c03b7d7725b-ring-data-devices\") pod \"swift-ring-rebalance-kx9jd\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.361608 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f2a6db35-796d-485d-9b96-5c03b7d7725b-dispersionconf\") pod \"swift-ring-rebalance-kx9jd\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.361636 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f2a6db35-796d-485d-9b96-5c03b7d7725b-combined-ca-bundle\") pod \"swift-ring-rebalance-kx9jd\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.361683 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2df6g\" (UniqueName: \"kubernetes.io/projected/f2a6db35-796d-485d-9b96-5c03b7d7725b-kube-api-access-2df6g\") pod \"swift-ring-rebalance-kx9jd\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.361742 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f2a6db35-796d-485d-9b96-5c03b7d7725b-swiftconf\") pod \"swift-ring-rebalance-kx9jd\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.361772 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f2a6db35-796d-485d-9b96-5c03b7d7725b-scripts\") pod \"swift-ring-rebalance-kx9jd\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.361853 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f2a6db35-796d-485d-9b96-5c03b7d7725b-etc-swift\") pod \"swift-ring-rebalance-kx9jd\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.363390 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f2a6db35-796d-485d-9b96-5c03b7d7725b-etc-swift\") pod 
\"swift-ring-rebalance-kx9jd\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.363991 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f2a6db35-796d-485d-9b96-5c03b7d7725b-ring-data-devices\") pod \"swift-ring-rebalance-kx9jd\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.365301 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f2a6db35-796d-485d-9b96-5c03b7d7725b-scripts\") pod \"swift-ring-rebalance-kx9jd\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.375333 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f2a6db35-796d-485d-9b96-5c03b7d7725b-swiftconf\") pod \"swift-ring-rebalance-kx9jd\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.390551 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f2a6db35-796d-485d-9b96-5c03b7d7725b-dispersionconf\") pod \"swift-ring-rebalance-kx9jd\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.392052 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a6db35-796d-485d-9b96-5c03b7d7725b-combined-ca-bundle\") pod \"swift-ring-rebalance-kx9jd\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 
10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.402140 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2df6g\" (UniqueName: \"kubernetes.io/projected/f2a6db35-796d-485d-9b96-5c03b7d7725b-kube-api-access-2df6g\") pod \"swift-ring-rebalance-kx9jd\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.497407 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.558881 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.588596 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-n99vb" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.596896 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-v5nw7" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.745082 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.772629 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6279de0c-ae0a-49f5-983b-aed9821a1c6f-dns-svc\") pod \"6279de0c-ae0a-49f5-983b-aed9821a1c6f\" (UID: \"6279de0c-ae0a-49f5-983b-aed9821a1c6f\") " Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.773439 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-ovsdbserver-sb\") pod \"89777380-e0ef-43b5-b247-4033b38bfaba\" (UID: \"89777380-e0ef-43b5-b247-4033b38bfaba\") " Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.773499 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-dns-svc\") pod \"89777380-e0ef-43b5-b247-4033b38bfaba\" (UID: \"89777380-e0ef-43b5-b247-4033b38bfaba\") " Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.773515 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-ovsdbserver-nb\") pod \"89777380-e0ef-43b5-b247-4033b38bfaba\" (UID: \"89777380-e0ef-43b5-b247-4033b38bfaba\") " Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.773542 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbfns\" (UniqueName: \"kubernetes.io/projected/6279de0c-ae0a-49f5-983b-aed9821a1c6f-kube-api-access-mbfns\") pod \"6279de0c-ae0a-49f5-983b-aed9821a1c6f\" (UID: 
\"6279de0c-ae0a-49f5-983b-aed9821a1c6f\") " Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.773602 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-config\") pod \"89777380-e0ef-43b5-b247-4033b38bfaba\" (UID: \"89777380-e0ef-43b5-b247-4033b38bfaba\") " Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.773636 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6279de0c-ae0a-49f5-983b-aed9821a1c6f-ovsdbserver-nb\") pod \"6279de0c-ae0a-49f5-983b-aed9821a1c6f\" (UID: \"6279de0c-ae0a-49f5-983b-aed9821a1c6f\") " Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.773713 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6279de0c-ae0a-49f5-983b-aed9821a1c6f-config\") pod \"6279de0c-ae0a-49f5-983b-aed9821a1c6f\" (UID: \"6279de0c-ae0a-49f5-983b-aed9821a1c6f\") " Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.773730 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsg8b\" (UniqueName: \"kubernetes.io/projected/89777380-e0ef-43b5-b247-4033b38bfaba-kube-api-access-tsg8b\") pod \"89777380-e0ef-43b5-b247-4033b38bfaba\" (UID: \"89777380-e0ef-43b5-b247-4033b38bfaba\") " Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.796836 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6279de0c-ae0a-49f5-983b-aed9821a1c6f-kube-api-access-mbfns" (OuterVolumeSpecName: "kube-api-access-mbfns") pod "6279de0c-ae0a-49f5-983b-aed9821a1c6f" (UID: "6279de0c-ae0a-49f5-983b-aed9821a1c6f"). InnerVolumeSpecName "kube-api-access-mbfns". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.801798 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbfns\" (UniqueName: \"kubernetes.io/projected/6279de0c-ae0a-49f5-983b-aed9821a1c6f-kube-api-access-mbfns\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.804179 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89777380-e0ef-43b5-b247-4033b38bfaba-kube-api-access-tsg8b" (OuterVolumeSpecName: "kube-api-access-tsg8b") pod "89777380-e0ef-43b5-b247-4033b38bfaba" (UID: "89777380-e0ef-43b5-b247-4033b38bfaba"). InnerVolumeSpecName "kube-api-access-tsg8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.834552 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "89777380-e0ef-43b5-b247-4033b38bfaba" (UID: "89777380-e0ef-43b5-b247-4033b38bfaba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.840992 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-config" (OuterVolumeSpecName: "config") pod "89777380-e0ef-43b5-b247-4033b38bfaba" (UID: "89777380-e0ef-43b5-b247-4033b38bfaba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.844792 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6279de0c-ae0a-49f5-983b-aed9821a1c6f-config" (OuterVolumeSpecName: "config") pod "6279de0c-ae0a-49f5-983b-aed9821a1c6f" (UID: "6279de0c-ae0a-49f5-983b-aed9821a1c6f"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.862520 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "89777380-e0ef-43b5-b247-4033b38bfaba" (UID: "89777380-e0ef-43b5-b247-4033b38bfaba"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.888723 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6279de0c-ae0a-49f5-983b-aed9821a1c6f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6279de0c-ae0a-49f5-983b-aed9821a1c6f" (UID: "6279de0c-ae0a-49f5-983b-aed9821a1c6f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.892381 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-n99vb" event={"ID":"6279de0c-ae0a-49f5-983b-aed9821a1c6f","Type":"ContainerDied","Data":"01276af75e3bac56a091aa0a1ab118a1071a1770af55b6a2072ccec67ffee323"} Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.892440 4965 scope.go:117] "RemoveContainer" containerID="b2e9188738538db9a8df45c5b8ba9be8ce2b388e4150833015d02c983e937550" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.892551 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-n99vb" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.904559 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.904584 4965 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.904592 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.904600 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6279de0c-ae0a-49f5-983b-aed9821a1c6f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.904610 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6279de0c-ae0a-49f5-983b-aed9821a1c6f-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.904618 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsg8b\" (UniqueName: \"kubernetes.io/projected/89777380-e0ef-43b5-b247-4033b38bfaba-kube-api-access-tsg8b\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.905370 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" event={"ID":"59e04a77-6c47-4906-86c7-72e8a36e120c","Type":"ContainerStarted","Data":"feeae2cc21bdacee5d2b3b592d0dc5c61c7070aee632a7ecbf902939ce3970f4"} Feb 19 10:02:00 crc 
kubenswrapper[4965]: I0219 10:02:00.905484 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.905865 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6279de0c-ae0a-49f5-983b-aed9821a1c6f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6279de0c-ae0a-49f5-983b-aed9821a1c6f" (UID: "6279de0c-ae0a-49f5-983b-aed9821a1c6f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.911372 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-v5nw7" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.912062 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-v5nw7" event={"ID":"89777380-e0ef-43b5-b247-4033b38bfaba","Type":"ContainerDied","Data":"4c328ae00bc7d46b53a733f83b9620665d8311a92e03c1c2cb97898ebcf06962"} Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.912873 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "89777380-e0ef-43b5-b247-4033b38bfaba" (UID: "89777380-e0ef-43b5-b247-4033b38bfaba"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.932243 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" podStartSLOduration=2.93222544 podStartE2EDuration="2.93222544s" podCreationTimestamp="2026-02-19 10:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:00.921348005 +0000 UTC m=+1176.542669325" watchObservedRunningTime="2026-02-19 10:02:00.93222544 +0000 UTC m=+1176.553546750" Feb 19 10:02:00 crc kubenswrapper[4965]: I0219 10:02:00.939338 4965 scope.go:117] "RemoveContainer" containerID="9539470165ac2fef05418c883c5450cafcc79bf8b13c7686843087710cb85574" Feb 19 10:02:01 crc kubenswrapper[4965]: I0219 10:02:01.007146 4965 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6279de0c-ae0a-49f5-983b-aed9821a1c6f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:01 crc kubenswrapper[4965]: I0219 10:02:01.007173 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89777380-e0ef-43b5-b247-4033b38bfaba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:01 crc kubenswrapper[4965]: I0219 10:02:01.171756 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-kx9jd"] Feb 19 10:02:01 crc kubenswrapper[4965]: W0219 10:02:01.182698 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2a6db35_796d_485d_9b96_5c03b7d7725b.slice/crio-3384f9e65f37e9156662312afa4d4f1a3f8f46638d8f28a38e22273137eca4ee WatchSource:0}: Error finding container 3384f9e65f37e9156662312afa4d4f1a3f8f46638d8f28a38e22273137eca4ee: Status 404 returned error can't find the container with id 
3384f9e65f37e9156662312afa4d4f1a3f8f46638d8f28a38e22273137eca4ee Feb 19 10:02:01 crc kubenswrapper[4965]: I0219 10:02:01.311371 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-etc-swift\") pod \"swift-storage-0\" (UID: \"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") " pod="openstack/swift-storage-0" Feb 19 10:02:01 crc kubenswrapper[4965]: E0219 10:02:01.312912 4965 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 10:02:01 crc kubenswrapper[4965]: E0219 10:02:01.312956 4965 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 10:02:01 crc kubenswrapper[4965]: E0219 10:02:01.313026 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-etc-swift podName:2c3ae050-b164-4fbc-9e5b-392eb0a4fb53 nodeName:}" failed. No retries permitted until 2026-02-19 10:02:03.313003845 +0000 UTC m=+1178.934325155 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-etc-swift") pod "swift-storage-0" (UID: "2c3ae050-b164-4fbc-9e5b-392eb0a4fb53") : configmap "swift-ring-files" not found Feb 19 10:02:01 crc kubenswrapper[4965]: I0219 10:02:01.329534 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-v5nw7"] Feb 19 10:02:01 crc kubenswrapper[4965]: I0219 10:02:01.344556 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-v5nw7"] Feb 19 10:02:01 crc kubenswrapper[4965]: I0219 10:02:01.369061 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-n99vb"] Feb 19 10:02:01 crc kubenswrapper[4965]: I0219 10:02:01.376487 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-n99vb"] Feb 19 10:02:01 crc kubenswrapper[4965]: I0219 10:02:01.925116 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kx9jd" event={"ID":"f2a6db35-796d-485d-9b96-5c03b7d7725b","Type":"ContainerStarted","Data":"3384f9e65f37e9156662312afa4d4f1a3f8f46638d8f28a38e22273137eca4ee"} Feb 19 10:02:03 crc kubenswrapper[4965]: I0219 10:02:03.209070 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6279de0c-ae0a-49f5-983b-aed9821a1c6f" path="/var/lib/kubelet/pods/6279de0c-ae0a-49f5-983b-aed9821a1c6f/volumes" Feb 19 10:02:03 crc kubenswrapper[4965]: I0219 10:02:03.210208 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89777380-e0ef-43b5-b247-4033b38bfaba" path="/var/lib/kubelet/pods/89777380-e0ef-43b5-b247-4033b38bfaba/volumes" Feb 19 10:02:03 crc kubenswrapper[4965]: I0219 10:02:03.358614 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-etc-swift\") pod \"swift-storage-0\" (UID: 
\"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") " pod="openstack/swift-storage-0" Feb 19 10:02:03 crc kubenswrapper[4965]: E0219 10:02:03.359013 4965 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 10:02:03 crc kubenswrapper[4965]: E0219 10:02:03.359036 4965 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 10:02:03 crc kubenswrapper[4965]: E0219 10:02:03.359086 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-etc-swift podName:2c3ae050-b164-4fbc-9e5b-392eb0a4fb53 nodeName:}" failed. No retries permitted until 2026-02-19 10:02:07.359066902 +0000 UTC m=+1182.980388212 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-etc-swift") pod "swift-storage-0" (UID: "2c3ae050-b164-4fbc-9e5b-392eb0a4fb53") : configmap "swift-ring-files" not found Feb 19 10:02:03 crc kubenswrapper[4965]: I0219 10:02:03.364696 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-jrhtd"] Feb 19 10:02:03 crc kubenswrapper[4965]: E0219 10:02:03.365037 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6279de0c-ae0a-49f5-983b-aed9821a1c6f" containerName="init" Feb 19 10:02:03 crc kubenswrapper[4965]: I0219 10:02:03.365054 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="6279de0c-ae0a-49f5-983b-aed9821a1c6f" containerName="init" Feb 19 10:02:03 crc kubenswrapper[4965]: E0219 10:02:03.365078 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89777380-e0ef-43b5-b247-4033b38bfaba" containerName="init" Feb 19 10:02:03 crc kubenswrapper[4965]: I0219 10:02:03.365084 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="89777380-e0ef-43b5-b247-4033b38bfaba" 
containerName="init" Feb 19 10:02:03 crc kubenswrapper[4965]: I0219 10:02:03.365278 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="89777380-e0ef-43b5-b247-4033b38bfaba" containerName="init" Feb 19 10:02:03 crc kubenswrapper[4965]: I0219 10:02:03.365292 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="6279de0c-ae0a-49f5-983b-aed9821a1c6f" containerName="init" Feb 19 10:02:03 crc kubenswrapper[4965]: I0219 10:02:03.365918 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jrhtd" Feb 19 10:02:03 crc kubenswrapper[4965]: I0219 10:02:03.372305 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jrhtd"] Feb 19 10:02:03 crc kubenswrapper[4965]: I0219 10:02:03.409166 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 19 10:02:03 crc kubenswrapper[4965]: I0219 10:02:03.460597 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt758\" (UniqueName: \"kubernetes.io/projected/22c8c4ab-703d-43c6-8007-a06089a42fc5-kube-api-access-vt758\") pod \"root-account-create-update-jrhtd\" (UID: \"22c8c4ab-703d-43c6-8007-a06089a42fc5\") " pod="openstack/root-account-create-update-jrhtd" Feb 19 10:02:03 crc kubenswrapper[4965]: I0219 10:02:03.460676 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22c8c4ab-703d-43c6-8007-a06089a42fc5-operator-scripts\") pod \"root-account-create-update-jrhtd\" (UID: \"22c8c4ab-703d-43c6-8007-a06089a42fc5\") " pod="openstack/root-account-create-update-jrhtd" Feb 19 10:02:03 crc kubenswrapper[4965]: I0219 10:02:03.562588 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt758\" (UniqueName: 
\"kubernetes.io/projected/22c8c4ab-703d-43c6-8007-a06089a42fc5-kube-api-access-vt758\") pod \"root-account-create-update-jrhtd\" (UID: \"22c8c4ab-703d-43c6-8007-a06089a42fc5\") " pod="openstack/root-account-create-update-jrhtd" Feb 19 10:02:03 crc kubenswrapper[4965]: I0219 10:02:03.562656 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22c8c4ab-703d-43c6-8007-a06089a42fc5-operator-scripts\") pod \"root-account-create-update-jrhtd\" (UID: \"22c8c4ab-703d-43c6-8007-a06089a42fc5\") " pod="openstack/root-account-create-update-jrhtd" Feb 19 10:02:03 crc kubenswrapper[4965]: I0219 10:02:03.563636 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22c8c4ab-703d-43c6-8007-a06089a42fc5-operator-scripts\") pod \"root-account-create-update-jrhtd\" (UID: \"22c8c4ab-703d-43c6-8007-a06089a42fc5\") " pod="openstack/root-account-create-update-jrhtd" Feb 19 10:02:03 crc kubenswrapper[4965]: I0219 10:02:03.582472 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt758\" (UniqueName: \"kubernetes.io/projected/22c8c4ab-703d-43c6-8007-a06089a42fc5-kube-api-access-vt758\") pod \"root-account-create-update-jrhtd\" (UID: \"22c8c4ab-703d-43c6-8007-a06089a42fc5\") " pod="openstack/root-account-create-update-jrhtd" Feb 19 10:02:03 crc kubenswrapper[4965]: I0219 10:02:03.731765 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jrhtd" Feb 19 10:02:06 crc kubenswrapper[4965]: I0219 10:02:06.536287 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-pl6lh"] Feb 19 10:02:06 crc kubenswrapper[4965]: I0219 10:02:06.538036 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-pl6lh" Feb 19 10:02:06 crc kubenswrapper[4965]: I0219 10:02:06.545086 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pl6lh"] Feb 19 10:02:06 crc kubenswrapper[4965]: I0219 10:02:06.624022 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jrhtd"] Feb 19 10:02:06 crc kubenswrapper[4965]: W0219 10:02:06.629573 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22c8c4ab_703d_43c6_8007_a06089a42fc5.slice/crio-2977f673b87593c75f0d7de17b2641a7c7add47dd8656364e7acd367e4196b06 WatchSource:0}: Error finding container 2977f673b87593c75f0d7de17b2641a7c7add47dd8656364e7acd367e4196b06: Status 404 returned error can't find the container with id 2977f673b87593c75f0d7de17b2641a7c7add47dd8656364e7acd367e4196b06 Feb 19 10:02:06 crc kubenswrapper[4965]: I0219 10:02:06.631490 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f4a6564-b3dd-48b8-8f45-b89155f4ddbf-operator-scripts\") pod \"glance-db-create-pl6lh\" (UID: \"5f4a6564-b3dd-48b8-8f45-b89155f4ddbf\") " pod="openstack/glance-db-create-pl6lh" Feb 19 10:02:06 crc kubenswrapper[4965]: I0219 10:02:06.631566 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kwdd\" (UniqueName: \"kubernetes.io/projected/5f4a6564-b3dd-48b8-8f45-b89155f4ddbf-kube-api-access-5kwdd\") pod \"glance-db-create-pl6lh\" (UID: \"5f4a6564-b3dd-48b8-8f45-b89155f4ddbf\") " pod="openstack/glance-db-create-pl6lh" Feb 19 10:02:06 crc kubenswrapper[4965]: I0219 10:02:06.690730 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-596f-account-create-update-l526t"] Feb 19 10:02:06 crc kubenswrapper[4965]: I0219 10:02:06.696129 4965 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-596f-account-create-update-l526t" Feb 19 10:02:06 crc kubenswrapper[4965]: I0219 10:02:06.698712 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 19 10:02:06 crc kubenswrapper[4965]: I0219 10:02:06.712239 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-596f-account-create-update-l526t"] Feb 19 10:02:06 crc kubenswrapper[4965]: I0219 10:02:06.734394 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f4a6564-b3dd-48b8-8f45-b89155f4ddbf-operator-scripts\") pod \"glance-db-create-pl6lh\" (UID: \"5f4a6564-b3dd-48b8-8f45-b89155f4ddbf\") " pod="openstack/glance-db-create-pl6lh" Feb 19 10:02:06 crc kubenswrapper[4965]: I0219 10:02:06.734762 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kwdd\" (UniqueName: \"kubernetes.io/projected/5f4a6564-b3dd-48b8-8f45-b89155f4ddbf-kube-api-access-5kwdd\") pod \"glance-db-create-pl6lh\" (UID: \"5f4a6564-b3dd-48b8-8f45-b89155f4ddbf\") " pod="openstack/glance-db-create-pl6lh" Feb 19 10:02:06 crc kubenswrapper[4965]: I0219 10:02:06.735382 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f4a6564-b3dd-48b8-8f45-b89155f4ddbf-operator-scripts\") pod \"glance-db-create-pl6lh\" (UID: \"5f4a6564-b3dd-48b8-8f45-b89155f4ddbf\") " pod="openstack/glance-db-create-pl6lh" Feb 19 10:02:06 crc kubenswrapper[4965]: I0219 10:02:06.754012 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kwdd\" (UniqueName: \"kubernetes.io/projected/5f4a6564-b3dd-48b8-8f45-b89155f4ddbf-kube-api-access-5kwdd\") pod \"glance-db-create-pl6lh\" (UID: \"5f4a6564-b3dd-48b8-8f45-b89155f4ddbf\") " pod="openstack/glance-db-create-pl6lh" Feb 19 10:02:06 crc 
kubenswrapper[4965]: I0219 10:02:06.836401 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87f25999-5c83-4b40-9d6e-c32d88532e00-operator-scripts\") pod \"glance-596f-account-create-update-l526t\" (UID: \"87f25999-5c83-4b40-9d6e-c32d88532e00\") " pod="openstack/glance-596f-account-create-update-l526t" Feb 19 10:02:06 crc kubenswrapper[4965]: I0219 10:02:06.836447 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm4t9\" (UniqueName: \"kubernetes.io/projected/87f25999-5c83-4b40-9d6e-c32d88532e00-kube-api-access-wm4t9\") pod \"glance-596f-account-create-update-l526t\" (UID: \"87f25999-5c83-4b40-9d6e-c32d88532e00\") " pod="openstack/glance-596f-account-create-update-l526t" Feb 19 10:02:06 crc kubenswrapper[4965]: I0219 10:02:06.854955 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pl6lh" Feb 19 10:02:06 crc kubenswrapper[4965]: I0219 10:02:06.945867 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87f25999-5c83-4b40-9d6e-c32d88532e00-operator-scripts\") pod \"glance-596f-account-create-update-l526t\" (UID: \"87f25999-5c83-4b40-9d6e-c32d88532e00\") " pod="openstack/glance-596f-account-create-update-l526t" Feb 19 10:02:06 crc kubenswrapper[4965]: I0219 10:02:06.945922 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm4t9\" (UniqueName: \"kubernetes.io/projected/87f25999-5c83-4b40-9d6e-c32d88532e00-kube-api-access-wm4t9\") pod \"glance-596f-account-create-update-l526t\" (UID: \"87f25999-5c83-4b40-9d6e-c32d88532e00\") " pod="openstack/glance-596f-account-create-update-l526t" Feb 19 10:02:06 crc kubenswrapper[4965]: I0219 10:02:06.947540 4965 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87f25999-5c83-4b40-9d6e-c32d88532e00-operator-scripts\") pod \"glance-596f-account-create-update-l526t\" (UID: \"87f25999-5c83-4b40-9d6e-c32d88532e00\") " pod="openstack/glance-596f-account-create-update-l526t" Feb 19 10:02:06 crc kubenswrapper[4965]: I0219 10:02:06.969476 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm4t9\" (UniqueName: \"kubernetes.io/projected/87f25999-5c83-4b40-9d6e-c32d88532e00-kube-api-access-wm4t9\") pod \"glance-596f-account-create-update-l526t\" (UID: \"87f25999-5c83-4b40-9d6e-c32d88532e00\") " pod="openstack/glance-596f-account-create-update-l526t" Feb 19 10:02:06 crc kubenswrapper[4965]: I0219 10:02:06.992764 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1520d7ba-9d74-47f8-9c7a-9731ae9ff49e","Type":"ContainerStarted","Data":"54184166e050a61032085adcce115c99d222d74d08bc42ca7675a75728e23682"} Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.001159 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e7a4c9f4-b898-43b4-812d-ab4f17c2124d","Type":"ContainerStarted","Data":"aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282"} Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.020037 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kx9jd" event={"ID":"f2a6db35-796d-485d-9b96-5c03b7d7725b","Type":"ContainerStarted","Data":"5ae0b07800d5705fef55b740493646265c68d9b9ae4149c8b2af57424e2c01fb"} Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.024764 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jrhtd" event={"ID":"22c8c4ab-703d-43c6-8007-a06089a42fc5","Type":"ContainerStarted","Data":"69d0ce7d229f21da5b7a38cdd07621d4accb6a0beddfe52b80b2c8d758e82c75"} Feb 19 10:02:07 crc kubenswrapper[4965]: 
I0219 10:02:07.024797 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jrhtd" event={"ID":"22c8c4ab-703d-43c6-8007-a06089a42fc5","Type":"ContainerStarted","Data":"2977f673b87593c75f0d7de17b2641a7c7add47dd8656364e7acd367e4196b06"} Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.028284 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.251442854 podStartE2EDuration="42.028259829s" podCreationTimestamp="2026-02-19 10:01:25 +0000 UTC" firstStartedPulling="2026-02-19 10:01:28.418267585 +0000 UTC m=+1144.039588895" lastFinishedPulling="2026-02-19 10:02:06.19508456 +0000 UTC m=+1181.816405870" observedRunningTime="2026-02-19 10:02:07.019817024 +0000 UTC m=+1182.641138334" watchObservedRunningTime="2026-02-19 10:02:07.028259829 +0000 UTC m=+1182.649581169" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.031905 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-596f-account-create-update-l526t" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.038466 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-kx9jd" podStartSLOduration=2.028166008 podStartE2EDuration="7.038448736s" podCreationTimestamp="2026-02-19 10:02:00 +0000 UTC" firstStartedPulling="2026-02-19 10:02:01.186104113 +0000 UTC m=+1176.807425423" lastFinishedPulling="2026-02-19 10:02:06.196386841 +0000 UTC m=+1181.817708151" observedRunningTime="2026-02-19 10:02:07.036659843 +0000 UTC m=+1182.657981153" watchObservedRunningTime="2026-02-19 10:02:07.038448736 +0000 UTC m=+1182.659770046" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.236870 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-25kh7"] Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.239313 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-25kh7" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.259874 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-25kh7"] Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.313122 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pl6lh"] Feb 19 10:02:07 crc kubenswrapper[4965]: W0219 10:02:07.332500 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f4a6564_b3dd_48b8_8f45_b89155f4ddbf.slice/crio-614f6e99934aaf09cada18f2d777b98c577187bbd42bdd86e21b542265b1a70b WatchSource:0}: Error finding container 614f6e99934aaf09cada18f2d777b98c577187bbd42bdd86e21b542265b1a70b: Status 404 returned error can't find the container with id 614f6e99934aaf09cada18f2d777b98c577187bbd42bdd86e21b542265b1a70b Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.350114 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7351-account-create-update-8ssjj"] Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.351558 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7351-account-create-update-8ssjj" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.353046 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/504af3e1-9b2c-4c21-8243-00e8b011c665-operator-scripts\") pod \"keystone-db-create-25kh7\" (UID: \"504af3e1-9b2c-4c21-8243-00e8b011c665\") " pod="openstack/keystone-db-create-25kh7" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.353134 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztwdb\" (UniqueName: \"kubernetes.io/projected/504af3e1-9b2c-4c21-8243-00e8b011c665-kube-api-access-ztwdb\") pod \"keystone-db-create-25kh7\" (UID: \"504af3e1-9b2c-4c21-8243-00e8b011c665\") " pod="openstack/keystone-db-create-25kh7" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.359745 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.364653 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7351-account-create-update-8ssjj"] Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.450185 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-kldd9"] Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.451569 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-kldd9" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.455066 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/504af3e1-9b2c-4c21-8243-00e8b011c665-operator-scripts\") pod \"keystone-db-create-25kh7\" (UID: \"504af3e1-9b2c-4c21-8243-00e8b011c665\") " pod="openstack/keystone-db-create-25kh7" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.455122 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baca6400-0fa5-49f2-8eb2-54a774607cc3-operator-scripts\") pod \"keystone-7351-account-create-update-8ssjj\" (UID: \"baca6400-0fa5-49f2-8eb2-54a774607cc3\") " pod="openstack/keystone-7351-account-create-update-8ssjj" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.455214 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztwdb\" (UniqueName: \"kubernetes.io/projected/504af3e1-9b2c-4c21-8243-00e8b011c665-kube-api-access-ztwdb\") pod \"keystone-db-create-25kh7\" (UID: \"504af3e1-9b2c-4c21-8243-00e8b011c665\") " pod="openstack/keystone-db-create-25kh7" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.455295 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d2q2\" (UniqueName: \"kubernetes.io/projected/baca6400-0fa5-49f2-8eb2-54a774607cc3-kube-api-access-8d2q2\") pod \"keystone-7351-account-create-update-8ssjj\" (UID: \"baca6400-0fa5-49f2-8eb2-54a774607cc3\") " pod="openstack/keystone-7351-account-create-update-8ssjj" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.455331 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-etc-swift\") pod 
\"swift-storage-0\" (UID: \"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") " pod="openstack/swift-storage-0" Feb 19 10:02:07 crc kubenswrapper[4965]: E0219 10:02:07.455468 4965 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 10:02:07 crc kubenswrapper[4965]: E0219 10:02:07.455488 4965 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 10:02:07 crc kubenswrapper[4965]: E0219 10:02:07.455527 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-etc-swift podName:2c3ae050-b164-4fbc-9e5b-392eb0a4fb53 nodeName:}" failed. No retries permitted until 2026-02-19 10:02:15.455513272 +0000 UTC m=+1191.076834582 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-etc-swift") pod "swift-storage-0" (UID: "2c3ae050-b164-4fbc-9e5b-392eb0a4fb53") : configmap "swift-ring-files" not found Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.456412 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/504af3e1-9b2c-4c21-8243-00e8b011c665-operator-scripts\") pod \"keystone-db-create-25kh7\" (UID: \"504af3e1-9b2c-4c21-8243-00e8b011c665\") " pod="openstack/keystone-db-create-25kh7" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.460838 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-kldd9"] Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.500502 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztwdb\" (UniqueName: \"kubernetes.io/projected/504af3e1-9b2c-4c21-8243-00e8b011c665-kube-api-access-ztwdb\") pod \"keystone-db-create-25kh7\" (UID: \"504af3e1-9b2c-4c21-8243-00e8b011c665\") " 
pod="openstack/keystone-db-create-25kh7" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.535101 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-596f-account-create-update-l526t"] Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.555358 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-ff7d-account-create-update-wx6sr"] Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.556290 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9a5f937-3184-4cef-a4ac-8f7205952bbc-operator-scripts\") pod \"placement-db-create-kldd9\" (UID: \"e9a5f937-3184-4cef-a4ac-8f7205952bbc\") " pod="openstack/placement-db-create-kldd9" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.556362 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d2q2\" (UniqueName: \"kubernetes.io/projected/baca6400-0fa5-49f2-8eb2-54a774607cc3-kube-api-access-8d2q2\") pod \"keystone-7351-account-create-update-8ssjj\" (UID: \"baca6400-0fa5-49f2-8eb2-54a774607cc3\") " pod="openstack/keystone-7351-account-create-update-8ssjj" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.556436 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baca6400-0fa5-49f2-8eb2-54a774607cc3-operator-scripts\") pod \"keystone-7351-account-create-update-8ssjj\" (UID: \"baca6400-0fa5-49f2-8eb2-54a774607cc3\") " pod="openstack/keystone-7351-account-create-update-8ssjj" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.556479 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzwr8\" (UniqueName: \"kubernetes.io/projected/e9a5f937-3184-4cef-a4ac-8f7205952bbc-kube-api-access-vzwr8\") pod \"placement-db-create-kldd9\" (UID: 
\"e9a5f937-3184-4cef-a4ac-8f7205952bbc\") " pod="openstack/placement-db-create-kldd9" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.556642 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ff7d-account-create-update-wx6sr" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.557586 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baca6400-0fa5-49f2-8eb2-54a774607cc3-operator-scripts\") pod \"keystone-7351-account-create-update-8ssjj\" (UID: \"baca6400-0fa5-49f2-8eb2-54a774607cc3\") " pod="openstack/keystone-7351-account-create-update-8ssjj" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.560636 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ff7d-account-create-update-wx6sr"] Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.561037 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.565975 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-25kh7" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.597631 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d2q2\" (UniqueName: \"kubernetes.io/projected/baca6400-0fa5-49f2-8eb2-54a774607cc3-kube-api-access-8d2q2\") pod \"keystone-7351-account-create-update-8ssjj\" (UID: \"baca6400-0fa5-49f2-8eb2-54a774607cc3\") " pod="openstack/keystone-7351-account-create-update-8ssjj" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.658613 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzwr8\" (UniqueName: \"kubernetes.io/projected/e9a5f937-3184-4cef-a4ac-8f7205952bbc-kube-api-access-vzwr8\") pod \"placement-db-create-kldd9\" (UID: \"e9a5f937-3184-4cef-a4ac-8f7205952bbc\") " pod="openstack/placement-db-create-kldd9" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.658674 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7b581c7-5ca4-4e60-bea9-db65839ed46c-operator-scripts\") pod \"placement-ff7d-account-create-update-wx6sr\" (UID: \"b7b581c7-5ca4-4e60-bea9-db65839ed46c\") " pod="openstack/placement-ff7d-account-create-update-wx6sr" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.658702 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7mcc\" (UniqueName: \"kubernetes.io/projected/b7b581c7-5ca4-4e60-bea9-db65839ed46c-kube-api-access-p7mcc\") pod \"placement-ff7d-account-create-update-wx6sr\" (UID: \"b7b581c7-5ca4-4e60-bea9-db65839ed46c\") " pod="openstack/placement-ff7d-account-create-update-wx6sr" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.658778 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e9a5f937-3184-4cef-a4ac-8f7205952bbc-operator-scripts\") pod \"placement-db-create-kldd9\" (UID: \"e9a5f937-3184-4cef-a4ac-8f7205952bbc\") " pod="openstack/placement-db-create-kldd9" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.660412 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9a5f937-3184-4cef-a4ac-8f7205952bbc-operator-scripts\") pod \"placement-db-create-kldd9\" (UID: \"e9a5f937-3184-4cef-a4ac-8f7205952bbc\") " pod="openstack/placement-db-create-kldd9" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.681883 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzwr8\" (UniqueName: \"kubernetes.io/projected/e9a5f937-3184-4cef-a4ac-8f7205952bbc-kube-api-access-vzwr8\") pod \"placement-db-create-kldd9\" (UID: \"e9a5f937-3184-4cef-a4ac-8f7205952bbc\") " pod="openstack/placement-db-create-kldd9" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.750085 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7351-account-create-update-8ssjj" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.765478 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7b581c7-5ca4-4e60-bea9-db65839ed46c-operator-scripts\") pod \"placement-ff7d-account-create-update-wx6sr\" (UID: \"b7b581c7-5ca4-4e60-bea9-db65839ed46c\") " pod="openstack/placement-ff7d-account-create-update-wx6sr" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.765587 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7mcc\" (UniqueName: \"kubernetes.io/projected/b7b581c7-5ca4-4e60-bea9-db65839ed46c-kube-api-access-p7mcc\") pod \"placement-ff7d-account-create-update-wx6sr\" (UID: \"b7b581c7-5ca4-4e60-bea9-db65839ed46c\") " pod="openstack/placement-ff7d-account-create-update-wx6sr" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.766756 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7b581c7-5ca4-4e60-bea9-db65839ed46c-operator-scripts\") pod \"placement-ff7d-account-create-update-wx6sr\" (UID: \"b7b581c7-5ca4-4e60-bea9-db65839ed46c\") " pod="openstack/placement-ff7d-account-create-update-wx6sr" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.769755 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-kldd9" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.789012 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7mcc\" (UniqueName: \"kubernetes.io/projected/b7b581c7-5ca4-4e60-bea9-db65839ed46c-kube-api-access-p7mcc\") pod \"placement-ff7d-account-create-update-wx6sr\" (UID: \"b7b581c7-5ca4-4e60-bea9-db65839ed46c\") " pod="openstack/placement-ff7d-account-create-update-wx6sr" Feb 19 10:02:07 crc kubenswrapper[4965]: I0219 10:02:07.885639 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ff7d-account-create-update-wx6sr" Feb 19 10:02:08 crc kubenswrapper[4965]: I0219 10:02:08.049871 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-596f-account-create-update-l526t" event={"ID":"87f25999-5c83-4b40-9d6e-c32d88532e00","Type":"ContainerStarted","Data":"37b1a3213ee9697bb90c072ab7d08ac6fb1373b0c45fe8c561e027c524da23ec"} Feb 19 10:02:08 crc kubenswrapper[4965]: I0219 10:02:08.052080 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-596f-account-create-update-l526t" event={"ID":"87f25999-5c83-4b40-9d6e-c32d88532e00","Type":"ContainerStarted","Data":"e069b30b052e99f0fb149e84ce7826d58ae6f75a34177118c49918b34ddf62f5"} Feb 19 10:02:08 crc kubenswrapper[4965]: I0219 10:02:08.052640 4965 generic.go:334] "Generic (PLEG): container finished" podID="5f4a6564-b3dd-48b8-8f45-b89155f4ddbf" containerID="d456e2f2a99a581533e374b7b37765945723f234c8eba5b3534a15b6418180e8" exitCode=0 Feb 19 10:02:08 crc kubenswrapper[4965]: I0219 10:02:08.052747 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pl6lh" event={"ID":"5f4a6564-b3dd-48b8-8f45-b89155f4ddbf","Type":"ContainerDied","Data":"d456e2f2a99a581533e374b7b37765945723f234c8eba5b3534a15b6418180e8"} Feb 19 10:02:08 crc kubenswrapper[4965]: I0219 10:02:08.052769 4965 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-db-create-pl6lh" event={"ID":"5f4a6564-b3dd-48b8-8f45-b89155f4ddbf","Type":"ContainerStarted","Data":"614f6e99934aaf09cada18f2d777b98c577187bbd42bdd86e21b542265b1a70b"} Feb 19 10:02:08 crc kubenswrapper[4965]: I0219 10:02:08.058303 4965 generic.go:334] "Generic (PLEG): container finished" podID="22c8c4ab-703d-43c6-8007-a06089a42fc5" containerID="69d0ce7d229f21da5b7a38cdd07621d4accb6a0beddfe52b80b2c8d758e82c75" exitCode=0 Feb 19 10:02:08 crc kubenswrapper[4965]: I0219 10:02:08.059014 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jrhtd" event={"ID":"22c8c4ab-703d-43c6-8007-a06089a42fc5","Type":"ContainerDied","Data":"69d0ce7d229f21da5b7a38cdd07621d4accb6a0beddfe52b80b2c8d758e82c75"} Feb 19 10:02:08 crc kubenswrapper[4965]: I0219 10:02:08.075238 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-596f-account-create-update-l526t" podStartSLOduration=2.075215779 podStartE2EDuration="2.075215779s" podCreationTimestamp="2026-02-19 10:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:08.069426498 +0000 UTC m=+1183.690747808" watchObservedRunningTime="2026-02-19 10:02:08.075215779 +0000 UTC m=+1183.696537089" Feb 19 10:02:08 crc kubenswrapper[4965]: I0219 10:02:08.210290 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-25kh7"] Feb 19 10:02:08 crc kubenswrapper[4965]: W0219 10:02:08.253388 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod504af3e1_9b2c_4c21_8243_00e8b011c665.slice/crio-1ce6319a8a32729c7a9cf71ebd8bb0f492f1b937136035a18c3c93f1b5a0a103 WatchSource:0}: Error finding container 1ce6319a8a32729c7a9cf71ebd8bb0f492f1b937136035a18c3c93f1b5a0a103: Status 404 returned error can't find the container with 
id 1ce6319a8a32729c7a9cf71ebd8bb0f492f1b937136035a18c3c93f1b5a0a103 Feb 19 10:02:08 crc kubenswrapper[4965]: I0219 10:02:08.356851 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7351-account-create-update-8ssjj"] Feb 19 10:02:08 crc kubenswrapper[4965]: I0219 10:02:08.593522 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-kldd9"] Feb 19 10:02:08 crc kubenswrapper[4965]: I0219 10:02:08.798045 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 19 10:02:08 crc kubenswrapper[4965]: I0219 10:02:08.799348 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" Feb 19 10:02:08 crc kubenswrapper[4965]: I0219 10:02:08.887472 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jrhtd" Feb 19 10:02:08 crc kubenswrapper[4965]: I0219 10:02:08.903650 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5xvv9"] Feb 19 10:02:08 crc kubenswrapper[4965]: I0219 10:02:08.903872 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-5xvv9" podUID="3f73b9d2-a434-4638-bce4-6c710166a455" containerName="dnsmasq-dns" containerID="cri-o://16b56c3d65bf036b7a875d7b26a3cbe7d7b1ab5b8e5737919cbd729aa3139250" gracePeriod=10 Feb 19 10:02:08 crc kubenswrapper[4965]: I0219 10:02:08.929905 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt758\" (UniqueName: \"kubernetes.io/projected/22c8c4ab-703d-43c6-8007-a06089a42fc5-kube-api-access-vt758\") pod \"22c8c4ab-703d-43c6-8007-a06089a42fc5\" (UID: \"22c8c4ab-703d-43c6-8007-a06089a42fc5\") " Feb 19 10:02:08 crc kubenswrapper[4965]: I0219 10:02:08.929979 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22c8c4ab-703d-43c6-8007-a06089a42fc5-operator-scripts\") pod \"22c8c4ab-703d-43c6-8007-a06089a42fc5\" (UID: \"22c8c4ab-703d-43c6-8007-a06089a42fc5\") " Feb 19 10:02:08 crc kubenswrapper[4965]: I0219 10:02:08.931463 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c8c4ab-703d-43c6-8007-a06089a42fc5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22c8c4ab-703d-43c6-8007-a06089a42fc5" (UID: "22c8c4ab-703d-43c6-8007-a06089a42fc5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:08 crc kubenswrapper[4965]: I0219 10:02:08.943961 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c8c4ab-703d-43c6-8007-a06089a42fc5-kube-api-access-vt758" (OuterVolumeSpecName: "kube-api-access-vt758") pod "22c8c4ab-703d-43c6-8007-a06089a42fc5" (UID: "22c8c4ab-703d-43c6-8007-a06089a42fc5"). InnerVolumeSpecName "kube-api-access-vt758". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:08 crc kubenswrapper[4965]: I0219 10:02:08.962828 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ff7d-account-create-update-wx6sr"] Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.034067 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt758\" (UniqueName: \"kubernetes.io/projected/22c8c4ab-703d-43c6-8007-a06089a42fc5-kube-api-access-vt758\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.034098 4965 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22c8c4ab-703d-43c6-8007-a06089a42fc5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.092891 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7351-account-create-update-8ssjj" event={"ID":"baca6400-0fa5-49f2-8eb2-54a774607cc3","Type":"ContainerStarted","Data":"b2c13e1059d076cb67bc06414f3afa17cf1b7857d675093db6bea1e5de55dc5c"} Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.092936 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7351-account-create-update-8ssjj" event={"ID":"baca6400-0fa5-49f2-8eb2-54a774607cc3","Type":"ContainerStarted","Data":"c6f6040d1361f469c13e1bc6368c7c76eb47c56fbd03cb368f59a16636d1d19a"} Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.100685 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jrhtd" Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.100687 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jrhtd" event={"ID":"22c8c4ab-703d-43c6-8007-a06089a42fc5","Type":"ContainerDied","Data":"2977f673b87593c75f0d7de17b2641a7c7add47dd8656364e7acd367e4196b06"} Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.101246 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2977f673b87593c75f0d7de17b2641a7c7add47dd8656364e7acd367e4196b06" Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.103147 4965 generic.go:334] "Generic (PLEG): container finished" podID="87f25999-5c83-4b40-9d6e-c32d88532e00" containerID="37b1a3213ee9697bb90c072ab7d08ac6fb1373b0c45fe8c561e027c524da23ec" exitCode=0 Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.103210 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-596f-account-create-update-l526t" event={"ID":"87f25999-5c83-4b40-9d6e-c32d88532e00","Type":"ContainerDied","Data":"37b1a3213ee9697bb90c072ab7d08ac6fb1373b0c45fe8c561e027c524da23ec"} Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.110628 4965 generic.go:334] "Generic (PLEG): container finished" podID="3f73b9d2-a434-4638-bce4-6c710166a455" containerID="16b56c3d65bf036b7a875d7b26a3cbe7d7b1ab5b8e5737919cbd729aa3139250" exitCode=0 Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.110722 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5xvv9" event={"ID":"3f73b9d2-a434-4638-bce4-6c710166a455","Type":"ContainerDied","Data":"16b56c3d65bf036b7a875d7b26a3cbe7d7b1ab5b8e5737919cbd729aa3139250"} Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.117308 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7351-account-create-update-8ssjj" podStartSLOduration=2.117290689 
podStartE2EDuration="2.117290689s" podCreationTimestamp="2026-02-19 10:02:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:09.115845244 +0000 UTC m=+1184.737166554" watchObservedRunningTime="2026-02-19 10:02:09.117290689 +0000 UTC m=+1184.738611989" Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.118902 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ff7d-account-create-update-wx6sr" event={"ID":"b7b581c7-5ca4-4e60-bea9-db65839ed46c","Type":"ContainerStarted","Data":"d5486fe4292be6acda2492d77c1e36346fa471c50c6e6a99bb7855be3a6549b1"} Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.120657 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kldd9" event={"ID":"e9a5f937-3184-4cef-a4ac-8f7205952bbc","Type":"ContainerStarted","Data":"526001e016d3f1ecd82f92520a4c5184147503fbabed0e6f8703f6924e37a45b"} Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.120683 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kldd9" event={"ID":"e9a5f937-3184-4cef-a4ac-8f7205952bbc","Type":"ContainerStarted","Data":"2e9b7693a7fa099d15eb12923813ee63bb69c891136b6a4127b08c57b264b488"} Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.145474 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e7a4c9f4-b898-43b4-812d-ab4f17c2124d","Type":"ContainerStarted","Data":"67d21e069f590b62a698ca97a479f7c757d71c1567f4f456cd32f54e49f48f83"} Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.156510 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-kldd9" podStartSLOduration=2.156488371 podStartE2EDuration="2.156488371s" podCreationTimestamp="2026-02-19 10:02:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:09.150621409 +0000 UTC m=+1184.771942719" watchObservedRunningTime="2026-02-19 10:02:09.156488371 +0000 UTC m=+1184.777809681" Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.156916 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-25kh7" event={"ID":"504af3e1-9b2c-4c21-8243-00e8b011c665","Type":"ContainerStarted","Data":"3de348a507e057bdc27188e8836a0f27d6fb5f564743fa24e567ce6c24a7c27b"} Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.156980 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-25kh7" event={"ID":"504af3e1-9b2c-4c21-8243-00e8b011c665","Type":"ContainerStarted","Data":"1ce6319a8a32729c7a9cf71ebd8bb0f492f1b937136035a18c3c93f1b5a0a103"} Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.399683 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.457572 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5xvv9" Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.544003 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f73b9d2-a434-4638-bce4-6c710166a455-config\") pod \"3f73b9d2-a434-4638-bce4-6c710166a455\" (UID: \"3f73b9d2-a434-4638-bce4-6c710166a455\") " Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.544212 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f73b9d2-a434-4638-bce4-6c710166a455-dns-svc\") pod \"3f73b9d2-a434-4638-bce4-6c710166a455\" (UID: \"3f73b9d2-a434-4638-bce4-6c710166a455\") " Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.544288 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2ptb\" (UniqueName: \"kubernetes.io/projected/3f73b9d2-a434-4638-bce4-6c710166a455-kube-api-access-r2ptb\") pod \"3f73b9d2-a434-4638-bce4-6c710166a455\" (UID: \"3f73b9d2-a434-4638-bce4-6c710166a455\") " Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.553668 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f73b9d2-a434-4638-bce4-6c710166a455-kube-api-access-r2ptb" (OuterVolumeSpecName: "kube-api-access-r2ptb") pod "3f73b9d2-a434-4638-bce4-6c710166a455" (UID: "3f73b9d2-a434-4638-bce4-6c710166a455"). InnerVolumeSpecName "kube-api-access-r2ptb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.646770 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2ptb\" (UniqueName: \"kubernetes.io/projected/3f73b9d2-a434-4638-bce4-6c710166a455-kube-api-access-r2ptb\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.652321 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f73b9d2-a434-4638-bce4-6c710166a455-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3f73b9d2-a434-4638-bce4-6c710166a455" (UID: "3f73b9d2-a434-4638-bce4-6c710166a455"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.668947 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f73b9d2-a434-4638-bce4-6c710166a455-config" (OuterVolumeSpecName: "config") pod "3f73b9d2-a434-4638-bce4-6c710166a455" (UID: "3f73b9d2-a434-4638-bce4-6c710166a455"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.748860 4965 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f73b9d2-a434-4638-bce4-6c710166a455-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.748894 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f73b9d2-a434-4638-bce4-6c710166a455-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.794222 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-pl6lh" Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.850082 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f4a6564-b3dd-48b8-8f45-b89155f4ddbf-operator-scripts\") pod \"5f4a6564-b3dd-48b8-8f45-b89155f4ddbf\" (UID: \"5f4a6564-b3dd-48b8-8f45-b89155f4ddbf\") " Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.850135 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kwdd\" (UniqueName: \"kubernetes.io/projected/5f4a6564-b3dd-48b8-8f45-b89155f4ddbf-kube-api-access-5kwdd\") pod \"5f4a6564-b3dd-48b8-8f45-b89155f4ddbf\" (UID: \"5f4a6564-b3dd-48b8-8f45-b89155f4ddbf\") " Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.850537 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f4a6564-b3dd-48b8-8f45-b89155f4ddbf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5f4a6564-b3dd-48b8-8f45-b89155f4ddbf" (UID: "5f4a6564-b3dd-48b8-8f45-b89155f4ddbf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.850823 4965 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f4a6564-b3dd-48b8-8f45-b89155f4ddbf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.853941 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f4a6564-b3dd-48b8-8f45-b89155f4ddbf-kube-api-access-5kwdd" (OuterVolumeSpecName: "kube-api-access-5kwdd") pod "5f4a6564-b3dd-48b8-8f45-b89155f4ddbf" (UID: "5f4a6564-b3dd-48b8-8f45-b89155f4ddbf"). InnerVolumeSpecName "kube-api-access-5kwdd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:09 crc kubenswrapper[4965]: I0219 10:02:09.952457 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kwdd\" (UniqueName: \"kubernetes.io/projected/5f4a6564-b3dd-48b8-8f45-b89155f4ddbf-kube-api-access-5kwdd\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.048601 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-ktrzq" Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.169960 4965 generic.go:334] "Generic (PLEG): container finished" podID="b7b581c7-5ca4-4e60-bea9-db65839ed46c" containerID="a4e325c54e3b3515aaa7fe72f19d2cda2462a7f90e24a8802365a26273561f24" exitCode=0 Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.170050 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ff7d-account-create-update-wx6sr" event={"ID":"b7b581c7-5ca4-4e60-bea9-db65839ed46c","Type":"ContainerDied","Data":"a4e325c54e3b3515aaa7fe72f19d2cda2462a7f90e24a8802365a26273561f24"} Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.172268 4965 generic.go:334] "Generic (PLEG): container finished" podID="e9a5f937-3184-4cef-a4ac-8f7205952bbc" containerID="526001e016d3f1ecd82f92520a4c5184147503fbabed0e6f8703f6924e37a45b" exitCode=0 Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.172321 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kldd9" event={"ID":"e9a5f937-3184-4cef-a4ac-8f7205952bbc","Type":"ContainerDied","Data":"526001e016d3f1ecd82f92520a4c5184147503fbabed0e6f8703f6924e37a45b"} Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.174113 4965 generic.go:334] "Generic (PLEG): container finished" podID="504af3e1-9b2c-4c21-8243-00e8b011c665" containerID="3de348a507e057bdc27188e8836a0f27d6fb5f564743fa24e567ce6c24a7c27b" exitCode=0 Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 
10:02:10.174310 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-25kh7" event={"ID":"504af3e1-9b2c-4c21-8243-00e8b011c665","Type":"ContainerDied","Data":"3de348a507e057bdc27188e8836a0f27d6fb5f564743fa24e567ce6c24a7c27b"} Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.189466 4965 generic.go:334] "Generic (PLEG): container finished" podID="baca6400-0fa5-49f2-8eb2-54a774607cc3" containerID="b2c13e1059d076cb67bc06414f3afa17cf1b7857d675093db6bea1e5de55dc5c" exitCode=0 Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.190408 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7351-account-create-update-8ssjj" event={"ID":"baca6400-0fa5-49f2-8eb2-54a774607cc3","Type":"ContainerDied","Data":"b2c13e1059d076cb67bc06414f3afa17cf1b7857d675093db6bea1e5de55dc5c"} Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.197431 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pl6lh" Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.197459 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pl6lh" event={"ID":"5f4a6564-b3dd-48b8-8f45-b89155f4ddbf","Type":"ContainerDied","Data":"614f6e99934aaf09cada18f2d777b98c577187bbd42bdd86e21b542265b1a70b"} Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.197477 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="614f6e99934aaf09cada18f2d777b98c577187bbd42bdd86e21b542265b1a70b" Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.213772 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5xvv9" Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.222274 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5xvv9" event={"ID":"3f73b9d2-a434-4638-bce4-6c710166a455","Type":"ContainerDied","Data":"268d6d4b5946ce229529d7ac053c81513f5f2f8d4df0774bee1035b3f0b1fd88"} Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.226546 4965 scope.go:117] "RemoveContainer" containerID="16b56c3d65bf036b7a875d7b26a3cbe7d7b1ab5b8e5737919cbd729aa3139250" Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.247714 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.259707 4965 scope.go:117] "RemoveContainer" containerID="35e15293de36b307843697bb1831d782f7c419a79a2a97d2244ac5438bb7255b" Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.347143 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5xvv9"] Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.353687 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5xvv9"] Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.381895 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg" Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.623052 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-25kh7" Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.683398 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztwdb\" (UniqueName: \"kubernetes.io/projected/504af3e1-9b2c-4c21-8243-00e8b011c665-kube-api-access-ztwdb\") pod \"504af3e1-9b2c-4c21-8243-00e8b011c665\" (UID: \"504af3e1-9b2c-4c21-8243-00e8b011c665\") " Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.683556 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/504af3e1-9b2c-4c21-8243-00e8b011c665-operator-scripts\") pod \"504af3e1-9b2c-4c21-8243-00e8b011c665\" (UID: \"504af3e1-9b2c-4c21-8243-00e8b011c665\") " Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.684634 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/504af3e1-9b2c-4c21-8243-00e8b011c665-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "504af3e1-9b2c-4c21-8243-00e8b011c665" (UID: "504af3e1-9b2c-4c21-8243-00e8b011c665"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.690223 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/504af3e1-9b2c-4c21-8243-00e8b011c665-kube-api-access-ztwdb" (OuterVolumeSpecName: "kube-api-access-ztwdb") pod "504af3e1-9b2c-4c21-8243-00e8b011c665" (UID: "504af3e1-9b2c-4c21-8243-00e8b011c665"). InnerVolumeSpecName "kube-api-access-ztwdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.699141 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-596f-account-create-update-l526t" Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.784682 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87f25999-5c83-4b40-9d6e-c32d88532e00-operator-scripts\") pod \"87f25999-5c83-4b40-9d6e-c32d88532e00\" (UID: \"87f25999-5c83-4b40-9d6e-c32d88532e00\") " Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.784732 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm4t9\" (UniqueName: \"kubernetes.io/projected/87f25999-5c83-4b40-9d6e-c32d88532e00-kube-api-access-wm4t9\") pod \"87f25999-5c83-4b40-9d6e-c32d88532e00\" (UID: \"87f25999-5c83-4b40-9d6e-c32d88532e00\") " Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.785061 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87f25999-5c83-4b40-9d6e-c32d88532e00-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87f25999-5c83-4b40-9d6e-c32d88532e00" (UID: "87f25999-5c83-4b40-9d6e-c32d88532e00"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.785221 4965 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/504af3e1-9b2c-4c21-8243-00e8b011c665-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.785234 4965 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87f25999-5c83-4b40-9d6e-c32d88532e00-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.785243 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztwdb\" (UniqueName: \"kubernetes.io/projected/504af3e1-9b2c-4c21-8243-00e8b011c665-kube-api-access-ztwdb\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.789645 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f25999-5c83-4b40-9d6e-c32d88532e00-kube-api-access-wm4t9" (OuterVolumeSpecName: "kube-api-access-wm4t9") pod "87f25999-5c83-4b40-9d6e-c32d88532e00" (UID: "87f25999-5c83-4b40-9d6e-c32d88532e00"). InnerVolumeSpecName "kube-api-access-wm4t9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:10 crc kubenswrapper[4965]: I0219 10:02:10.887696 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm4t9\" (UniqueName: \"kubernetes.io/projected/87f25999-5c83-4b40-9d6e-c32d88532e00-kube-api-access-wm4t9\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.211044 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f73b9d2-a434-4638-bce4-6c710166a455" path="/var/lib/kubelet/pods/3f73b9d2-a434-4638-bce4-6c710166a455/volumes" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.226404 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-596f-account-create-update-l526t" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.227093 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-596f-account-create-update-l526t" event={"ID":"87f25999-5c83-4b40-9d6e-c32d88532e00","Type":"ContainerDied","Data":"e069b30b052e99f0fb149e84ce7826d58ae6f75a34177118c49918b34ddf62f5"} Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.227148 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e069b30b052e99f0fb149e84ce7826d58ae6f75a34177118c49918b34ddf62f5" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.233389 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-25kh7" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.234416 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-25kh7" event={"ID":"504af3e1-9b2c-4c21-8243-00e8b011c665","Type":"ContainerDied","Data":"1ce6319a8a32729c7a9cf71ebd8bb0f492f1b937136035a18c3c93f1b5a0a103"} Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.234473 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ce6319a8a32729c7a9cf71ebd8bb0f492f1b937136035a18c3c93f1b5a0a103" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.250876 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="faab82f2-bc31-438d-b329-9a31d6ba5040" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.304231 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.394214 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.661357 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kldd9" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.714376 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9a5f937-3184-4cef-a4ac-8f7205952bbc-operator-scripts\") pod \"e9a5f937-3184-4cef-a4ac-8f7205952bbc\" (UID: \"e9a5f937-3184-4cef-a4ac-8f7205952bbc\") " Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.714586 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzwr8\" (UniqueName: \"kubernetes.io/projected/e9a5f937-3184-4cef-a4ac-8f7205952bbc-kube-api-access-vzwr8\") pod \"e9a5f937-3184-4cef-a4ac-8f7205952bbc\" (UID: \"e9a5f937-3184-4cef-a4ac-8f7205952bbc\") " Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.716091 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9a5f937-3184-4cef-a4ac-8f7205952bbc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e9a5f937-3184-4cef-a4ac-8f7205952bbc" (UID: "e9a5f937-3184-4cef-a4ac-8f7205952bbc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.742920 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9a5f937-3184-4cef-a4ac-8f7205952bbc-kube-api-access-vzwr8" (OuterVolumeSpecName: "kube-api-access-vzwr8") pod "e9a5f937-3184-4cef-a4ac-8f7205952bbc" (UID: "e9a5f937-3184-4cef-a4ac-8f7205952bbc"). InnerVolumeSpecName "kube-api-access-vzwr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.799062 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.802561 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ff7d-account-create-update-wx6sr" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.816234 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7mcc\" (UniqueName: \"kubernetes.io/projected/b7b581c7-5ca4-4e60-bea9-db65839ed46c-kube-api-access-p7mcc\") pod \"b7b581c7-5ca4-4e60-bea9-db65839ed46c\" (UID: \"b7b581c7-5ca4-4e60-bea9-db65839ed46c\") " Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.816273 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7b581c7-5ca4-4e60-bea9-db65839ed46c-operator-scripts\") pod \"b7b581c7-5ca4-4e60-bea9-db65839ed46c\" (UID: \"b7b581c7-5ca4-4e60-bea9-db65839ed46c\") " Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.816898 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzwr8\" (UniqueName: \"kubernetes.io/projected/e9a5f937-3184-4cef-a4ac-8f7205952bbc-kube-api-access-vzwr8\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.816910 4965 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9a5f937-3184-4cef-a4ac-8f7205952bbc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.818447 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7b581c7-5ca4-4e60-bea9-db65839ed46c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"b7b581c7-5ca4-4e60-bea9-db65839ed46c" (UID: "b7b581c7-5ca4-4e60-bea9-db65839ed46c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.822117 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7b581c7-5ca4-4e60-bea9-db65839ed46c-kube-api-access-p7mcc" (OuterVolumeSpecName: "kube-api-access-p7mcc") pod "b7b581c7-5ca4-4e60-bea9-db65839ed46c" (UID: "b7b581c7-5ca4-4e60-bea9-db65839ed46c"). InnerVolumeSpecName "kube-api-access-p7mcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.822559 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7351-account-create-update-8ssjj" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.894371 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.902974 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-vkkc7"] Feb 19 10:02:11 crc kubenswrapper[4965]: E0219 10:02:11.903617 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baca6400-0fa5-49f2-8eb2-54a774607cc3" containerName="mariadb-account-create-update" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.903642 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="baca6400-0fa5-49f2-8eb2-54a774607cc3" containerName="mariadb-account-create-update" Feb 19 10:02:11 crc kubenswrapper[4965]: E0219 10:02:11.903659 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="504af3e1-9b2c-4c21-8243-00e8b011c665" containerName="mariadb-database-create" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.903669 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="504af3e1-9b2c-4c21-8243-00e8b011c665" 
containerName="mariadb-database-create" Feb 19 10:02:11 crc kubenswrapper[4965]: E0219 10:02:11.903686 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f4a6564-b3dd-48b8-8f45-b89155f4ddbf" containerName="mariadb-database-create" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.903693 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f4a6564-b3dd-48b8-8f45-b89155f4ddbf" containerName="mariadb-database-create" Feb 19 10:02:11 crc kubenswrapper[4965]: E0219 10:02:11.903705 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9a5f937-3184-4cef-a4ac-8f7205952bbc" containerName="mariadb-database-create" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.903714 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9a5f937-3184-4cef-a4ac-8f7205952bbc" containerName="mariadb-database-create" Feb 19 10:02:11 crc kubenswrapper[4965]: E0219 10:02:11.903743 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f25999-5c83-4b40-9d6e-c32d88532e00" containerName="mariadb-account-create-update" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.903756 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f25999-5c83-4b40-9d6e-c32d88532e00" containerName="mariadb-account-create-update" Feb 19 10:02:11 crc kubenswrapper[4965]: E0219 10:02:11.903773 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b581c7-5ca4-4e60-bea9-db65839ed46c" containerName="mariadb-account-create-update" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.903781 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b581c7-5ca4-4e60-bea9-db65839ed46c" containerName="mariadb-account-create-update" Feb 19 10:02:11 crc kubenswrapper[4965]: E0219 10:02:11.903795 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22c8c4ab-703d-43c6-8007-a06089a42fc5" containerName="mariadb-account-create-update" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.903806 
4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="22c8c4ab-703d-43c6-8007-a06089a42fc5" containerName="mariadb-account-create-update" Feb 19 10:02:11 crc kubenswrapper[4965]: E0219 10:02:11.903823 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f73b9d2-a434-4638-bce4-6c710166a455" containerName="init" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.903832 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f73b9d2-a434-4638-bce4-6c710166a455" containerName="init" Feb 19 10:02:11 crc kubenswrapper[4965]: E0219 10:02:11.903848 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f73b9d2-a434-4638-bce4-6c710166a455" containerName="dnsmasq-dns" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.903855 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f73b9d2-a434-4638-bce4-6c710166a455" containerName="dnsmasq-dns" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.904069 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="504af3e1-9b2c-4c21-8243-00e8b011c665" containerName="mariadb-database-create" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.904089 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f4a6564-b3dd-48b8-8f45-b89155f4ddbf" containerName="mariadb-database-create" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.904104 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7b581c7-5ca4-4e60-bea9-db65839ed46c" containerName="mariadb-account-create-update" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.904113 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f25999-5c83-4b40-9d6e-c32d88532e00" containerName="mariadb-account-create-update" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.904125 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f73b9d2-a434-4638-bce4-6c710166a455" containerName="dnsmasq-dns" Feb 19 10:02:11 crc 
kubenswrapper[4965]: I0219 10:02:11.904137 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="22c8c4ab-703d-43c6-8007-a06089a42fc5" containerName="mariadb-account-create-update" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.904144 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="baca6400-0fa5-49f2-8eb2-54a774607cc3" containerName="mariadb-account-create-update" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.904152 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9a5f937-3184-4cef-a4ac-8f7205952bbc" containerName="mariadb-database-create" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.905167 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-vkkc7" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.906955 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-66d72" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.915704 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.919675 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d2q2\" (UniqueName: \"kubernetes.io/projected/baca6400-0fa5-49f2-8eb2-54a774607cc3-kube-api-access-8d2q2\") pod \"baca6400-0fa5-49f2-8eb2-54a774607cc3\" (UID: \"baca6400-0fa5-49f2-8eb2-54a774607cc3\") " Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.919805 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baca6400-0fa5-49f2-8eb2-54a774607cc3-operator-scripts\") pod \"baca6400-0fa5-49f2-8eb2-54a774607cc3\" (UID: \"baca6400-0fa5-49f2-8eb2-54a774607cc3\") " Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.920023 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-db-sync-config-data\") pod \"glance-db-sync-vkkc7\" (UID: \"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4\") " pod="openstack/glance-db-sync-vkkc7" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.920092 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-config-data\") pod \"glance-db-sync-vkkc7\" (UID: \"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4\") " pod="openstack/glance-db-sync-vkkc7" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.920153 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-combined-ca-bundle\") pod \"glance-db-sync-vkkc7\" (UID: \"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4\") " pod="openstack/glance-db-sync-vkkc7" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.920236 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baca6400-0fa5-49f2-8eb2-54a774607cc3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "baca6400-0fa5-49f2-8eb2-54a774607cc3" (UID: "baca6400-0fa5-49f2-8eb2-54a774607cc3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.920456 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h4gf\" (UniqueName: \"kubernetes.io/projected/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-kube-api-access-2h4gf\") pod \"glance-db-sync-vkkc7\" (UID: \"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4\") " pod="openstack/glance-db-sync-vkkc7" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.920605 4965 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baca6400-0fa5-49f2-8eb2-54a774607cc3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.920623 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7mcc\" (UniqueName: \"kubernetes.io/projected/b7b581c7-5ca4-4e60-bea9-db65839ed46c-kube-api-access-p7mcc\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.920648 4965 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7b581c7-5ca4-4e60-bea9-db65839ed46c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.924680 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-vkkc7"] Feb 19 10:02:11 crc kubenswrapper[4965]: I0219 10:02:11.927516 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baca6400-0fa5-49f2-8eb2-54a774607cc3-kube-api-access-8d2q2" (OuterVolumeSpecName: "kube-api-access-8d2q2") pod "baca6400-0fa5-49f2-8eb2-54a774607cc3" (UID: "baca6400-0fa5-49f2-8eb2-54a774607cc3"). InnerVolumeSpecName "kube-api-access-8d2q2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.021726 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h4gf\" (UniqueName: \"kubernetes.io/projected/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-kube-api-access-2h4gf\") pod \"glance-db-sync-vkkc7\" (UID: \"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4\") " pod="openstack/glance-db-sync-vkkc7" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.021807 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-db-sync-config-data\") pod \"glance-db-sync-vkkc7\" (UID: \"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4\") " pod="openstack/glance-db-sync-vkkc7" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.021905 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-config-data\") pod \"glance-db-sync-vkkc7\" (UID: \"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4\") " pod="openstack/glance-db-sync-vkkc7" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.021969 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-combined-ca-bundle\") pod \"glance-db-sync-vkkc7\" (UID: \"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4\") " pod="openstack/glance-db-sync-vkkc7" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.022105 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d2q2\" (UniqueName: \"kubernetes.io/projected/baca6400-0fa5-49f2-8eb2-54a774607cc3-kube-api-access-8d2q2\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.025942 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-db-sync-config-data\") pod \"glance-db-sync-vkkc7\" (UID: \"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4\") " pod="openstack/glance-db-sync-vkkc7" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.026134 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-combined-ca-bundle\") pod \"glance-db-sync-vkkc7\" (UID: \"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4\") " pod="openstack/glance-db-sync-vkkc7" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.026582 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-config-data\") pod \"glance-db-sync-vkkc7\" (UID: \"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4\") " pod="openstack/glance-db-sync-vkkc7" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.039209 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h4gf\" (UniqueName: \"kubernetes.io/projected/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-kube-api-access-2h4gf\") pod \"glance-db-sync-vkkc7\" (UID: \"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4\") " pod="openstack/glance-db-sync-vkkc7" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.243825 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-vkkc7" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.255622 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7351-account-create-update-8ssjj" event={"ID":"baca6400-0fa5-49f2-8eb2-54a774607cc3","Type":"ContainerDied","Data":"c6f6040d1361f469c13e1bc6368c7c76eb47c56fbd03cb368f59a16636d1d19a"} Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.255660 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6f6040d1361f469c13e1bc6368c7c76eb47c56fbd03cb368f59a16636d1d19a" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.255728 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7351-account-create-update-8ssjj" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.257577 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ff7d-account-create-update-wx6sr" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.257597 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ff7d-account-create-update-wx6sr" event={"ID":"b7b581c7-5ca4-4e60-bea9-db65839ed46c","Type":"ContainerDied","Data":"d5486fe4292be6acda2492d77c1e36346fa471c50c6e6a99bb7855be3a6549b1"} Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.257636 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5486fe4292be6acda2492d77c1e36346fa471c50c6e6a99bb7855be3a6549b1" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.262466 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-kldd9" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.262280 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kldd9" event={"ID":"e9a5f937-3184-4cef-a4ac-8f7205952bbc","Type":"ContainerDied","Data":"2e9b7693a7fa099d15eb12923813ee63bb69c891136b6a4127b08c57b264b488"} Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.262545 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e9b7693a7fa099d15eb12923813ee63bb69c891136b6a4127b08c57b264b488" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.310756 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.537975 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.541164 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.544141 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.544463 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-6rwpd" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.544629 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.544837 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.545563 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.636348 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89c93b76-069c-4c94-aa84-a77d7e4c8e26-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"89c93b76-069c-4c94-aa84-a77d7e4c8e26\") " pod="openstack/ovn-northd-0" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.636459 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89c93b76-069c-4c94-aa84-a77d7e4c8e26-scripts\") pod \"ovn-northd-0\" (UID: \"89c93b76-069c-4c94-aa84-a77d7e4c8e26\") " pod="openstack/ovn-northd-0" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.636487 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/89c93b76-069c-4c94-aa84-a77d7e4c8e26-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"89c93b76-069c-4c94-aa84-a77d7e4c8e26\") " pod="openstack/ovn-northd-0" Feb 19 
10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.636523 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/89c93b76-069c-4c94-aa84-a77d7e4c8e26-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"89c93b76-069c-4c94-aa84-a77d7e4c8e26\") " pod="openstack/ovn-northd-0" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.636581 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89c93b76-069c-4c94-aa84-a77d7e4c8e26-config\") pod \"ovn-northd-0\" (UID: \"89c93b76-069c-4c94-aa84-a77d7e4c8e26\") " pod="openstack/ovn-northd-0" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.636598 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59sfr\" (UniqueName: \"kubernetes.io/projected/89c93b76-069c-4c94-aa84-a77d7e4c8e26-kube-api-access-59sfr\") pod \"ovn-northd-0\" (UID: \"89c93b76-069c-4c94-aa84-a77d7e4c8e26\") " pod="openstack/ovn-northd-0" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.636660 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/89c93b76-069c-4c94-aa84-a77d7e4c8e26-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"89c93b76-069c-4c94-aa84-a77d7e4c8e26\") " pod="openstack/ovn-northd-0" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.739240 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89c93b76-069c-4c94-aa84-a77d7e4c8e26-scripts\") pod \"ovn-northd-0\" (UID: \"89c93b76-069c-4c94-aa84-a77d7e4c8e26\") " pod="openstack/ovn-northd-0" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.739308 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/89c93b76-069c-4c94-aa84-a77d7e4c8e26-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"89c93b76-069c-4c94-aa84-a77d7e4c8e26\") " pod="openstack/ovn-northd-0" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.739359 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/89c93b76-069c-4c94-aa84-a77d7e4c8e26-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"89c93b76-069c-4c94-aa84-a77d7e4c8e26\") " pod="openstack/ovn-northd-0" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.739397 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89c93b76-069c-4c94-aa84-a77d7e4c8e26-config\") pod \"ovn-northd-0\" (UID: \"89c93b76-069c-4c94-aa84-a77d7e4c8e26\") " pod="openstack/ovn-northd-0" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.739426 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59sfr\" (UniqueName: \"kubernetes.io/projected/89c93b76-069c-4c94-aa84-a77d7e4c8e26-kube-api-access-59sfr\") pod \"ovn-northd-0\" (UID: \"89c93b76-069c-4c94-aa84-a77d7e4c8e26\") " pod="openstack/ovn-northd-0" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.739478 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/89c93b76-069c-4c94-aa84-a77d7e4c8e26-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"89c93b76-069c-4c94-aa84-a77d7e4c8e26\") " pod="openstack/ovn-northd-0" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.739518 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89c93b76-069c-4c94-aa84-a77d7e4c8e26-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"89c93b76-069c-4c94-aa84-a77d7e4c8e26\") " 
pod="openstack/ovn-northd-0" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.739971 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/89c93b76-069c-4c94-aa84-a77d7e4c8e26-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"89c93b76-069c-4c94-aa84-a77d7e4c8e26\") " pod="openstack/ovn-northd-0" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.740171 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89c93b76-069c-4c94-aa84-a77d7e4c8e26-scripts\") pod \"ovn-northd-0\" (UID: \"89c93b76-069c-4c94-aa84-a77d7e4c8e26\") " pod="openstack/ovn-northd-0" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.741261 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89c93b76-069c-4c94-aa84-a77d7e4c8e26-config\") pod \"ovn-northd-0\" (UID: \"89c93b76-069c-4c94-aa84-a77d7e4c8e26\") " pod="openstack/ovn-northd-0" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.742677 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/89c93b76-069c-4c94-aa84-a77d7e4c8e26-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"89c93b76-069c-4c94-aa84-a77d7e4c8e26\") " pod="openstack/ovn-northd-0" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.743799 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/89c93b76-069c-4c94-aa84-a77d7e4c8e26-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"89c93b76-069c-4c94-aa84-a77d7e4c8e26\") " pod="openstack/ovn-northd-0" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.745735 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/89c93b76-069c-4c94-aa84-a77d7e4c8e26-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"89c93b76-069c-4c94-aa84-a77d7e4c8e26\") " pod="openstack/ovn-northd-0" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.755250 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59sfr\" (UniqueName: \"kubernetes.io/projected/89c93b76-069c-4c94-aa84-a77d7e4c8e26-kube-api-access-59sfr\") pod \"ovn-northd-0\" (UID: \"89c93b76-069c-4c94-aa84-a77d7e4c8e26\") " pod="openstack/ovn-northd-0" Feb 19 10:02:12 crc kubenswrapper[4965]: I0219 10:02:12.864010 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 10:02:14 crc kubenswrapper[4965]: I0219 10:02:14.744252 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-jrhtd"] Feb 19 10:02:14 crc kubenswrapper[4965]: I0219 10:02:14.757612 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-jrhtd"] Feb 19 10:02:15 crc kubenswrapper[4965]: I0219 10:02:15.226064 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c8c4ab-703d-43c6-8007-a06089a42fc5" path="/var/lib/kubelet/pods/22c8c4ab-703d-43c6-8007-a06089a42fc5/volumes" Feb 19 10:02:15 crc kubenswrapper[4965]: I0219 10:02:15.309543 4965 generic.go:334] "Generic (PLEG): container finished" podID="f2a6db35-796d-485d-9b96-5c03b7d7725b" containerID="5ae0b07800d5705fef55b740493646265c68d9b9ae4149c8b2af57424e2c01fb" exitCode=0 Feb 19 10:02:15 crc kubenswrapper[4965]: I0219 10:02:15.309616 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kx9jd" event={"ID":"f2a6db35-796d-485d-9b96-5c03b7d7725b","Type":"ContainerDied","Data":"5ae0b07800d5705fef55b740493646265c68d9b9ae4149c8b2af57424e2c01fb"} Feb 19 10:02:15 crc kubenswrapper[4965]: I0219 10:02:15.495849 4965 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-etc-swift\") pod \"swift-storage-0\" (UID: \"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") " pod="openstack/swift-storage-0" Feb 19 10:02:15 crc kubenswrapper[4965]: I0219 10:02:15.503532 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2c3ae050-b164-4fbc-9e5b-392eb0a4fb53-etc-swift\") pod \"swift-storage-0\" (UID: \"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53\") " pod="openstack/swift-storage-0" Feb 19 10:02:15 crc kubenswrapper[4965]: I0219 10:02:15.555658 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 19 10:02:15 crc kubenswrapper[4965]: I0219 10:02:15.817621 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.110810 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-vkkc7"] Feb 19 10:02:16 crc kubenswrapper[4965]: W0219 10:02:16.119619 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18ba8da5_8c18_4dad_91ce_dc34ef3fc6e4.slice/crio-b1addb71c18b15579fddf937c2541f6b592e47c287bae2b82729913628462a5d WatchSource:0}: Error finding container b1addb71c18b15579fddf937c2541f6b592e47c287bae2b82729913628462a5d: Status 404 returned error can't find the container with id b1addb71c18b15579fddf937c2541f6b592e47c287bae2b82729913628462a5d Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.321787 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"89c93b76-069c-4c94-aa84-a77d7e4c8e26","Type":"ContainerStarted","Data":"92383a2e753b20c052330e8dd5e3cf39bb6ae3659cdadd963311fb47d8571ca8"} Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.323841 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-db-sync-vkkc7" event={"ID":"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4","Type":"ContainerStarted","Data":"b1addb71c18b15579fddf937c2541f6b592e47c287bae2b82729913628462a5d"} Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.328412 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e7a4c9f4-b898-43b4-812d-ab4f17c2124d","Type":"ContainerStarted","Data":"0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4"} Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.375517 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=11.237031608 podStartE2EDuration="58.375493935s" podCreationTimestamp="2026-02-19 10:01:18 +0000 UTC" firstStartedPulling="2026-02-19 10:01:28.342767486 +0000 UTC m=+1143.964088796" lastFinishedPulling="2026-02-19 10:02:15.481229813 +0000 UTC m=+1191.102551123" observedRunningTime="2026-02-19 10:02:16.361434204 +0000 UTC m=+1191.982755654" watchObservedRunningTime="2026-02-19 10:02:16.375493935 +0000 UTC m=+1191.996815245" Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.744335 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.822260 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f2a6db35-796d-485d-9b96-5c03b7d7725b-scripts\") pod \"f2a6db35-796d-485d-9b96-5c03b7d7725b\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.822518 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f2a6db35-796d-485d-9b96-5c03b7d7725b-swiftconf\") pod \"f2a6db35-796d-485d-9b96-5c03b7d7725b\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.822559 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f2a6db35-796d-485d-9b96-5c03b7d7725b-etc-swift\") pod \"f2a6db35-796d-485d-9b96-5c03b7d7725b\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.822608 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f2a6db35-796d-485d-9b96-5c03b7d7725b-ring-data-devices\") pod \"f2a6db35-796d-485d-9b96-5c03b7d7725b\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.822636 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a6db35-796d-485d-9b96-5c03b7d7725b-combined-ca-bundle\") pod \"f2a6db35-796d-485d-9b96-5c03b7d7725b\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.822743 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2df6g\" (UniqueName: 
\"kubernetes.io/projected/f2a6db35-796d-485d-9b96-5c03b7d7725b-kube-api-access-2df6g\") pod \"f2a6db35-796d-485d-9b96-5c03b7d7725b\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.822762 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f2a6db35-796d-485d-9b96-5c03b7d7725b-dispersionconf\") pod \"f2a6db35-796d-485d-9b96-5c03b7d7725b\" (UID: \"f2a6db35-796d-485d-9b96-5c03b7d7725b\") " Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.823486 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2a6db35-796d-485d-9b96-5c03b7d7725b-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "f2a6db35-796d-485d-9b96-5c03b7d7725b" (UID: "f2a6db35-796d-485d-9b96-5c03b7d7725b"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.824185 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2a6db35-796d-485d-9b96-5c03b7d7725b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f2a6db35-796d-485d-9b96-5c03b7d7725b" (UID: "f2a6db35-796d-485d-9b96-5c03b7d7725b"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.829027 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2a6db35-796d-485d-9b96-5c03b7d7725b-kube-api-access-2df6g" (OuterVolumeSpecName: "kube-api-access-2df6g") pod "f2a6db35-796d-485d-9b96-5c03b7d7725b" (UID: "f2a6db35-796d-485d-9b96-5c03b7d7725b"). InnerVolumeSpecName "kube-api-access-2df6g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.850034 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2a6db35-796d-485d-9b96-5c03b7d7725b-scripts" (OuterVolumeSpecName: "scripts") pod "f2a6db35-796d-485d-9b96-5c03b7d7725b" (UID: "f2a6db35-796d-485d-9b96-5c03b7d7725b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.852161 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2a6db35-796d-485d-9b96-5c03b7d7725b-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "f2a6db35-796d-485d-9b96-5c03b7d7725b" (UID: "f2a6db35-796d-485d-9b96-5c03b7d7725b"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.873607 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2a6db35-796d-485d-9b96-5c03b7d7725b-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "f2a6db35-796d-485d-9b96-5c03b7d7725b" (UID: "f2a6db35-796d-485d-9b96-5c03b7d7725b"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.885240 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2a6db35-796d-485d-9b96-5c03b7d7725b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2a6db35-796d-485d-9b96-5c03b7d7725b" (UID: "f2a6db35-796d-485d-9b96-5c03b7d7725b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.924636 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2df6g\" (UniqueName: \"kubernetes.io/projected/f2a6db35-796d-485d-9b96-5c03b7d7725b-kube-api-access-2df6g\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.924684 4965 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f2a6db35-796d-485d-9b96-5c03b7d7725b-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.924696 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f2a6db35-796d-485d-9b96-5c03b7d7725b-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.924706 4965 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f2a6db35-796d-485d-9b96-5c03b7d7725b-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.924714 4965 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f2a6db35-796d-485d-9b96-5c03b7d7725b-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.924721 4965 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f2a6db35-796d-485d-9b96-5c03b7d7725b-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.924729 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a6db35-796d-485d-9b96-5c03b7d7725b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:16 crc kubenswrapper[4965]: I0219 10:02:16.944575 4965 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 10:02:17 crc kubenswrapper[4965]: I0219 10:02:17.393506 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kx9jd" event={"ID":"f2a6db35-796d-485d-9b96-5c03b7d7725b","Type":"ContainerDied","Data":"3384f9e65f37e9156662312afa4d4f1a3f8f46638d8f28a38e22273137eca4ee"} Feb 19 10:02:17 crc kubenswrapper[4965]: I0219 10:02:17.393825 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3384f9e65f37e9156662312afa4d4f1a3f8f46638d8f28a38e22273137eca4ee" Feb 19 10:02:17 crc kubenswrapper[4965]: I0219 10:02:17.393898 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-kx9jd" Feb 19 10:02:17 crc kubenswrapper[4965]: I0219 10:02:17.419506 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"89c93b76-069c-4c94-aa84-a77d7e4c8e26","Type":"ContainerStarted","Data":"bf872af5dc0e7b1ef59352a14e51d427d34c49d97f240f7d20e2aafff3bd2bb0"} Feb 19 10:02:17 crc kubenswrapper[4965]: I0219 10:02:17.449594 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53","Type":"ContainerStarted","Data":"ae4110fbbb37b8cac68a401872176da8f1df1d104c1fce1c1cadc3619c1f4b1c"} Feb 19 10:02:18 crc kubenswrapper[4965]: I0219 10:02:18.461328 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"89c93b76-069c-4c94-aa84-a77d7e4c8e26","Type":"ContainerStarted","Data":"f8914a5879228d15c51f7d565c508fad7f7b217f05b5cfbb9cda2457002c47cc"} Feb 19 10:02:18 crc kubenswrapper[4965]: I0219 10:02:18.461776 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 19 10:02:18 crc kubenswrapper[4965]: I0219 10:02:18.463134 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53","Type":"ContainerStarted","Data":"5f803760983393f77a732a86e2cc6318bddfa3bd6454f08826793efb56d58854"} Feb 19 10:02:18 crc kubenswrapper[4965]: I0219 10:02:18.465610 4965 generic.go:334] "Generic (PLEG): container finished" podID="bbd64606-53f8-484e-b8d2-c0fef4acb1bd" containerID="5a58c40e549604532d6c9dd9b699ffe7b8b46ce4c58064a2f3ecc8b63cbc14f1" exitCode=0 Feb 19 10:02:18 crc kubenswrapper[4965]: I0219 10:02:18.465648 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bbd64606-53f8-484e-b8d2-c0fef4acb1bd","Type":"ContainerDied","Data":"5a58c40e549604532d6c9dd9b699ffe7b8b46ce4c58064a2f3ecc8b63cbc14f1"} Feb 19 10:02:18 crc kubenswrapper[4965]: I0219 10:02:18.500145 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=5.188458894 podStartE2EDuration="6.50012107s" podCreationTimestamp="2026-02-19 10:02:12 +0000 UTC" firstStartedPulling="2026-02-19 10:02:15.820114121 +0000 UTC m=+1191.441435441" lastFinishedPulling="2026-02-19 10:02:17.131776307 +0000 UTC m=+1192.753097617" observedRunningTime="2026-02-19 10:02:18.490737043 +0000 UTC m=+1194.112058373" watchObservedRunningTime="2026-02-19 10:02:18.50012107 +0000 UTC m=+1194.121442390" Feb 19 10:02:19 crc kubenswrapper[4965]: I0219 10:02:19.479875 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bbd64606-53f8-484e-b8d2-c0fef4acb1bd","Type":"ContainerStarted","Data":"8fcc1af5d793f894765ce202de58987a948a9b64eb12e1b6d30caabf8608dd9d"} Feb 19 10:02:19 crc kubenswrapper[4965]: I0219 10:02:19.481256 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:02:19 crc kubenswrapper[4965]: I0219 10:02:19.484643 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53","Type":"ContainerStarted","Data":"0f602064a072f1faa2d42f7d8f65a11ef2297d2b538ec8c53a1ebbc6d1eb42dc"} Feb 19 10:02:19 crc kubenswrapper[4965]: I0219 10:02:19.484677 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53","Type":"ContainerStarted","Data":"40f092e2bf37a2f71c841b74f8a0dc2b37f1a530642e2ffbe97c9baaa27adad3"} Feb 19 10:02:19 crc kubenswrapper[4965]: I0219 10:02:19.484692 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53","Type":"ContainerStarted","Data":"6403d01d668c9a7d98d5d0f4371ccd4e26def3865109fca2a9369291b614a01e"} Feb 19 10:02:19 crc kubenswrapper[4965]: I0219 10:02:19.511946 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=51.978157341 podStartE2EDuration="1m7.511924296s" podCreationTimestamp="2026-02-19 10:01:12 +0000 UTC" firstStartedPulling="2026-02-19 10:01:28.347446041 +0000 UTC m=+1143.968767351" lastFinishedPulling="2026-02-19 10:01:43.881212996 +0000 UTC m=+1159.502534306" observedRunningTime="2026-02-19 10:02:19.506840652 +0000 UTC m=+1195.128161962" watchObservedRunningTime="2026-02-19 10:02:19.511924296 +0000 UTC m=+1195.133245606" Feb 19 10:02:19 crc kubenswrapper[4965]: I0219 10:02:19.677914 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:19 crc kubenswrapper[4965]: I0219 10:02:19.679346 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:19 crc kubenswrapper[4965]: I0219 10:02:19.683095 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:19 crc kubenswrapper[4965]: I0219 10:02:19.729226 4965 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-pr7kx"] Feb 19 10:02:19 crc kubenswrapper[4965]: E0219 10:02:19.729579 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a6db35-796d-485d-9b96-5c03b7d7725b" containerName="swift-ring-rebalance" Feb 19 10:02:19 crc kubenswrapper[4965]: I0219 10:02:19.729594 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a6db35-796d-485d-9b96-5c03b7d7725b" containerName="swift-ring-rebalance" Feb 19 10:02:19 crc kubenswrapper[4965]: I0219 10:02:19.729755 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2a6db35-796d-485d-9b96-5c03b7d7725b" containerName="swift-ring-rebalance" Feb 19 10:02:19 crc kubenswrapper[4965]: I0219 10:02:19.730670 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pr7kx" Feb 19 10:02:19 crc kubenswrapper[4965]: I0219 10:02:19.732439 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 19 10:02:19 crc kubenswrapper[4965]: I0219 10:02:19.742562 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pr7kx"] Feb 19 10:02:19 crc kubenswrapper[4965]: I0219 10:02:19.783896 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22cqr\" (UniqueName: \"kubernetes.io/projected/3bd8d85c-9a7d-4f54-a589-330a68d04f51-kube-api-access-22cqr\") pod \"root-account-create-update-pr7kx\" (UID: \"3bd8d85c-9a7d-4f54-a589-330a68d04f51\") " pod="openstack/root-account-create-update-pr7kx" Feb 19 10:02:19 crc kubenswrapper[4965]: I0219 10:02:19.784286 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd8d85c-9a7d-4f54-a589-330a68d04f51-operator-scripts\") pod 
\"root-account-create-update-pr7kx\" (UID: \"3bd8d85c-9a7d-4f54-a589-330a68d04f51\") " pod="openstack/root-account-create-update-pr7kx" Feb 19 10:02:19 crc kubenswrapper[4965]: I0219 10:02:19.885950 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22cqr\" (UniqueName: \"kubernetes.io/projected/3bd8d85c-9a7d-4f54-a589-330a68d04f51-kube-api-access-22cqr\") pod \"root-account-create-update-pr7kx\" (UID: \"3bd8d85c-9a7d-4f54-a589-330a68d04f51\") " pod="openstack/root-account-create-update-pr7kx" Feb 19 10:02:19 crc kubenswrapper[4965]: I0219 10:02:19.886075 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd8d85c-9a7d-4f54-a589-330a68d04f51-operator-scripts\") pod \"root-account-create-update-pr7kx\" (UID: \"3bd8d85c-9a7d-4f54-a589-330a68d04f51\") " pod="openstack/root-account-create-update-pr7kx" Feb 19 10:02:19 crc kubenswrapper[4965]: I0219 10:02:19.886697 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd8d85c-9a7d-4f54-a589-330a68d04f51-operator-scripts\") pod \"root-account-create-update-pr7kx\" (UID: \"3bd8d85c-9a7d-4f54-a589-330a68d04f51\") " pod="openstack/root-account-create-update-pr7kx" Feb 19 10:02:19 crc kubenswrapper[4965]: I0219 10:02:19.905610 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22cqr\" (UniqueName: \"kubernetes.io/projected/3bd8d85c-9a7d-4f54-a589-330a68d04f51-kube-api-access-22cqr\") pod \"root-account-create-update-pr7kx\" (UID: \"3bd8d85c-9a7d-4f54-a589-330a68d04f51\") " pod="openstack/root-account-create-update-pr7kx" Feb 19 10:02:20 crc kubenswrapper[4965]: I0219 10:02:20.057419 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pr7kx" Feb 19 10:02:20 crc kubenswrapper[4965]: I0219 10:02:20.500163 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:20 crc kubenswrapper[4965]: I0219 10:02:20.554342 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pr7kx"] Feb 19 10:02:21 crc kubenswrapper[4965]: I0219 10:02:21.245411 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="faab82f2-bc31-438d-b329-9a31d6ba5040" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 19 10:02:21 crc kubenswrapper[4965]: I0219 10:02:21.505442 4965 generic.go:334] "Generic (PLEG): container finished" podID="305a32d6-c9f8-4494-b356-75d6c54c7467" containerID="a1a65257004b242a15769f37a6af84d074317c1c847f8be584c5160b609c9964" exitCode=0 Feb 19 10:02:21 crc kubenswrapper[4965]: I0219 10:02:21.505499 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"305a32d6-c9f8-4494-b356-75d6c54c7467","Type":"ContainerDied","Data":"a1a65257004b242a15769f37a6af84d074317c1c847f8be584c5160b609c9964"} Feb 19 10:02:21 crc kubenswrapper[4965]: I0219 10:02:21.519528 4965 generic.go:334] "Generic (PLEG): container finished" podID="3bd8d85c-9a7d-4f54-a589-330a68d04f51" containerID="d9b57422079ea1c76777a981c29de019e18e721210b7a1402aa522902e89d98a" exitCode=0 Feb 19 10:02:21 crc kubenswrapper[4965]: I0219 10:02:21.520521 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pr7kx" event={"ID":"3bd8d85c-9a7d-4f54-a589-330a68d04f51","Type":"ContainerDied","Data":"d9b57422079ea1c76777a981c29de019e18e721210b7a1402aa522902e89d98a"} Feb 19 10:02:21 crc kubenswrapper[4965]: I0219 10:02:21.520545 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/root-account-create-update-pr7kx" event={"ID":"3bd8d85c-9a7d-4f54-a589-330a68d04f51","Type":"ContainerStarted","Data":"2124482f487d37c231729dd09cc6af7b7d6d0c38b55a22d83badfe14d87b6340"} Feb 19 10:02:22 crc kubenswrapper[4965]: I0219 10:02:22.543423 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53","Type":"ContainerStarted","Data":"098a9cb1c3cd2f074205cb2e01bf245519ed0c5fd5fd168cea96f4fbc2ebc136"} Feb 19 10:02:22 crc kubenswrapper[4965]: I0219 10:02:22.543777 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53","Type":"ContainerStarted","Data":"4e75dbc1720318c8118881541cd998f9fb82c717ee8f56c59d1b3a22939d7278"} Feb 19 10:02:22 crc kubenswrapper[4965]: I0219 10:02:22.543787 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53","Type":"ContainerStarted","Data":"b751079e14ca1ada9aab54fc25d3b4198f596872ea2e3328679418a85830eafa"} Feb 19 10:02:22 crc kubenswrapper[4965]: I0219 10:02:22.543799 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53","Type":"ContainerStarted","Data":"db7f7bb4efd0812ea2b9d7f6c199b3e90f1025b5f534afffaffb3a992b574fc9"} Feb 19 10:02:22 crc kubenswrapper[4965]: I0219 10:02:22.548764 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"305a32d6-c9f8-4494-b356-75d6c54c7467","Type":"ContainerStarted","Data":"b1bf88bec9e961815551bc486611bc2a7542f58e859808003d94fcd9f5c7cb21"} Feb 19 10:02:22 crc kubenswrapper[4965]: I0219 10:02:22.575551 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=54.269574809 podStartE2EDuration="1m10.575534109s" podCreationTimestamp="2026-02-19 10:01:12 
+0000 UTC" firstStartedPulling="2026-02-19 10:01:28.335898039 +0000 UTC m=+1143.957219349" lastFinishedPulling="2026-02-19 10:01:44.641857349 +0000 UTC m=+1160.263178649" observedRunningTime="2026-02-19 10:02:22.567234967 +0000 UTC m=+1198.188556297" watchObservedRunningTime="2026-02-19 10:02:22.575534109 +0000 UTC m=+1198.196855419" Feb 19 10:02:22 crc kubenswrapper[4965]: I0219 10:02:22.626343 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jlns7" Feb 19 10:02:22 crc kubenswrapper[4965]: I0219 10:02:22.631812 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jlns7" Feb 19 10:02:22 crc kubenswrapper[4965]: I0219 10:02:22.900666 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mwlb6-config-x7cx5"] Feb 19 10:02:22 crc kubenswrapper[4965]: I0219 10:02:22.902829 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwlb6-config-x7cx5" Feb 19 10:02:22 crc kubenswrapper[4965]: I0219 10:02:22.905009 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 19 10:02:22 crc kubenswrapper[4965]: I0219 10:02:22.919688 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mwlb6-config-x7cx5"] Feb 19 10:02:22 crc kubenswrapper[4965]: I0219 10:02:22.951237 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/58d6fed8-aa40-446d-897c-03103683edd7-var-log-ovn\") pod \"ovn-controller-mwlb6-config-x7cx5\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " pod="openstack/ovn-controller-mwlb6-config-x7cx5" Feb 19 10:02:22 crc kubenswrapper[4965]: I0219 10:02:22.951337 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt6rr\" 
(UniqueName: \"kubernetes.io/projected/58d6fed8-aa40-446d-897c-03103683edd7-kube-api-access-zt6rr\") pod \"ovn-controller-mwlb6-config-x7cx5\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " pod="openstack/ovn-controller-mwlb6-config-x7cx5" Feb 19 10:02:22 crc kubenswrapper[4965]: I0219 10:02:22.951398 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/58d6fed8-aa40-446d-897c-03103683edd7-var-run-ovn\") pod \"ovn-controller-mwlb6-config-x7cx5\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " pod="openstack/ovn-controller-mwlb6-config-x7cx5" Feb 19 10:02:22 crc kubenswrapper[4965]: I0219 10:02:22.951425 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58d6fed8-aa40-446d-897c-03103683edd7-scripts\") pod \"ovn-controller-mwlb6-config-x7cx5\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " pod="openstack/ovn-controller-mwlb6-config-x7cx5" Feb 19 10:02:22 crc kubenswrapper[4965]: I0219 10:02:22.951483 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/58d6fed8-aa40-446d-897c-03103683edd7-var-run\") pod \"ovn-controller-mwlb6-config-x7cx5\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " pod="openstack/ovn-controller-mwlb6-config-x7cx5" Feb 19 10:02:22 crc kubenswrapper[4965]: I0219 10:02:22.951508 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/58d6fed8-aa40-446d-897c-03103683edd7-additional-scripts\") pod \"ovn-controller-mwlb6-config-x7cx5\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " pod="openstack/ovn-controller-mwlb6-config-x7cx5" Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.052747 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zt6rr\" (UniqueName: \"kubernetes.io/projected/58d6fed8-aa40-446d-897c-03103683edd7-kube-api-access-zt6rr\") pod \"ovn-controller-mwlb6-config-x7cx5\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " pod="openstack/ovn-controller-mwlb6-config-x7cx5" Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.053086 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/58d6fed8-aa40-446d-897c-03103683edd7-var-run-ovn\") pod \"ovn-controller-mwlb6-config-x7cx5\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " pod="openstack/ovn-controller-mwlb6-config-x7cx5" Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.053115 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58d6fed8-aa40-446d-897c-03103683edd7-scripts\") pod \"ovn-controller-mwlb6-config-x7cx5\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " pod="openstack/ovn-controller-mwlb6-config-x7cx5" Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.053173 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/58d6fed8-aa40-446d-897c-03103683edd7-var-run\") pod \"ovn-controller-mwlb6-config-x7cx5\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " pod="openstack/ovn-controller-mwlb6-config-x7cx5" Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.053211 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/58d6fed8-aa40-446d-897c-03103683edd7-additional-scripts\") pod \"ovn-controller-mwlb6-config-x7cx5\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " pod="openstack/ovn-controller-mwlb6-config-x7cx5" Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.053246 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/58d6fed8-aa40-446d-897c-03103683edd7-var-log-ovn\") pod \"ovn-controller-mwlb6-config-x7cx5\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " pod="openstack/ovn-controller-mwlb6-config-x7cx5" Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.053475 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/58d6fed8-aa40-446d-897c-03103683edd7-var-log-ovn\") pod \"ovn-controller-mwlb6-config-x7cx5\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " pod="openstack/ovn-controller-mwlb6-config-x7cx5" Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.053513 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/58d6fed8-aa40-446d-897c-03103683edd7-var-run\") pod \"ovn-controller-mwlb6-config-x7cx5\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " pod="openstack/ovn-controller-mwlb6-config-x7cx5" Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.053581 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/58d6fed8-aa40-446d-897c-03103683edd7-var-run-ovn\") pod \"ovn-controller-mwlb6-config-x7cx5\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " pod="openstack/ovn-controller-mwlb6-config-x7cx5" Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.054547 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/58d6fed8-aa40-446d-897c-03103683edd7-additional-scripts\") pod \"ovn-controller-mwlb6-config-x7cx5\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " pod="openstack/ovn-controller-mwlb6-config-x7cx5" Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.055150 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/58d6fed8-aa40-446d-897c-03103683edd7-scripts\") pod \"ovn-controller-mwlb6-config-x7cx5\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " pod="openstack/ovn-controller-mwlb6-config-x7cx5" Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.089149 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt6rr\" (UniqueName: \"kubernetes.io/projected/58d6fed8-aa40-446d-897c-03103683edd7-kube-api-access-zt6rr\") pod \"ovn-controller-mwlb6-config-x7cx5\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " pod="openstack/ovn-controller-mwlb6-config-x7cx5" Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.108168 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.137767 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pr7kx" Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.248582 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mwlb6-config-x7cx5" Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.257064 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22cqr\" (UniqueName: \"kubernetes.io/projected/3bd8d85c-9a7d-4f54-a589-330a68d04f51-kube-api-access-22cqr\") pod \"3bd8d85c-9a7d-4f54-a589-330a68d04f51\" (UID: \"3bd8d85c-9a7d-4f54-a589-330a68d04f51\") " Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.257232 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd8d85c-9a7d-4f54-a589-330a68d04f51-operator-scripts\") pod \"3bd8d85c-9a7d-4f54-a589-330a68d04f51\" (UID: \"3bd8d85c-9a7d-4f54-a589-330a68d04f51\") " Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.258268 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bd8d85c-9a7d-4f54-a589-330a68d04f51-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3bd8d85c-9a7d-4f54-a589-330a68d04f51" (UID: "3bd8d85c-9a7d-4f54-a589-330a68d04f51"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.259054 4965 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd8d85c-9a7d-4f54-a589-330a68d04f51-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.271467 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd8d85c-9a7d-4f54-a589-330a68d04f51-kube-api-access-22cqr" (OuterVolumeSpecName: "kube-api-access-22cqr") pod "3bd8d85c-9a7d-4f54-a589-330a68d04f51" (UID: "3bd8d85c-9a7d-4f54-a589-330a68d04f51"). InnerVolumeSpecName "kube-api-access-22cqr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.361686 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22cqr\" (UniqueName: \"kubernetes.io/projected/3bd8d85c-9a7d-4f54-a589-330a68d04f51-kube-api-access-22cqr\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.442410 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.564834 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e7a4c9f4-b898-43b4-812d-ab4f17c2124d" containerName="prometheus" containerID="cri-o://aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282" gracePeriod=600 Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.565280 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pr7kx" Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.570113 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pr7kx" event={"ID":"3bd8d85c-9a7d-4f54-a589-330a68d04f51","Type":"ContainerDied","Data":"2124482f487d37c231729dd09cc6af7b7d6d0c38b55a22d83badfe14d87b6340"} Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.570155 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2124482f487d37c231729dd09cc6af7b7d6d0c38b55a22d83badfe14d87b6340" Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.570282 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e7a4c9f4-b898-43b4-812d-ab4f17c2124d" containerName="thanos-sidecar" containerID="cri-o://0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4" gracePeriod=600 Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 
10:02:23.570321 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e7a4c9f4-b898-43b4-812d-ab4f17c2124d" containerName="config-reloader" containerID="cri-o://67d21e069f590b62a698ca97a479f7c757d71c1567f4f456cd32f54e49f48f83" gracePeriod=600 Feb 19 10:02:23 crc kubenswrapper[4965]: I0219 10:02:23.779211 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mwlb6-config-x7cx5"] Feb 19 10:02:23 crc kubenswrapper[4965]: E0219 10:02:23.813020 4965 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7a4c9f4_b898_43b4_812d_ab4f17c2124d.slice/crio-conmon-0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7a4c9f4_b898_43b4_812d_ab4f17c2124d.slice/crio-0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7a4c9f4_b898_43b4_812d_ab4f17c2124d.slice/crio-aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282.scope\": RecentStats: unable to find data in memory cache]" Feb 19 10:02:24 crc kubenswrapper[4965]: W0219 10:02:24.119539 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58d6fed8_aa40_446d_897c_03103683edd7.slice/crio-1bc7dfb25b98223b30335006a229bdfc56f629fe92f639304eced86eb12d96b5 WatchSource:0}: Error finding container 1bc7dfb25b98223b30335006a229bdfc56f629fe92f639304eced86eb12d96b5: Status 404 returned error can't find the container with id 1bc7dfb25b98223b30335006a229bdfc56f629fe92f639304eced86eb12d96b5 Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.504248 4965 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.589147 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53","Type":"ContainerStarted","Data":"f277f8f6a602c062245c249a545eda5902fc5bc14056e53966ca2825ab8db649"} Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.591308 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwlb6-config-x7cx5" event={"ID":"58d6fed8-aa40-446d-897c-03103683edd7","Type":"ContainerStarted","Data":"608a21ed7bc143cd03932578a5956c3b61eb39b55c671064ae53cc1bffed4b79"} Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.591341 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwlb6-config-x7cx5" event={"ID":"58d6fed8-aa40-446d-897c-03103683edd7","Type":"ContainerStarted","Data":"1bc7dfb25b98223b30335006a229bdfc56f629fe92f639304eced86eb12d96b5"} Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.597850 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-config\") pod \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.597892 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9ks6\" (UniqueName: \"kubernetes.io/projected/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-kube-api-access-m9ks6\") pod \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.597929 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-thanos-prometheus-http-client-file\") pod \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.597951 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-config-out\") pod \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.597975 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-prometheus-metric-storage-rulefiles-0\") pod \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.598081 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-web-config\") pod \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.598133 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-prometheus-metric-storage-rulefiles-2\") pod \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.598159 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-tls-assets\") pod \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\" (UID: 
\"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.598305 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3\") pod \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.598410 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-prometheus-metric-storage-rulefiles-1\") pod \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\" (UID: \"e7a4c9f4-b898-43b4-812d-ab4f17c2124d\") " Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.599210 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "e7a4c9f4-b898-43b4-812d-ab4f17c2124d" (UID: "e7a4c9f4-b898-43b4-812d-ab4f17c2124d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.600293 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "e7a4c9f4-b898-43b4-812d-ab4f17c2124d" (UID: "e7a4c9f4-b898-43b4-812d-ab4f17c2124d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.601305 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "e7a4c9f4-b898-43b4-812d-ab4f17c2124d" (UID: "e7a4c9f4-b898-43b4-812d-ab4f17c2124d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.605019 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-config" (OuterVolumeSpecName: "config") pod "e7a4c9f4-b898-43b4-812d-ab4f17c2124d" (UID: "e7a4c9f4-b898-43b4-812d-ab4f17c2124d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.605100 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "e7a4c9f4-b898-43b4-812d-ab4f17c2124d" (UID: "e7a4c9f4-b898-43b4-812d-ab4f17c2124d"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.605963 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-kube-api-access-m9ks6" (OuterVolumeSpecName: "kube-api-access-m9ks6") pod "e7a4c9f4-b898-43b4-812d-ab4f17c2124d" (UID: "e7a4c9f4-b898-43b4-812d-ab4f17c2124d"). InnerVolumeSpecName "kube-api-access-m9ks6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.611742 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "e7a4c9f4-b898-43b4-812d-ab4f17c2124d" (UID: "e7a4c9f4-b898-43b4-812d-ab4f17c2124d"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.611936 4965 generic.go:334] "Generic (PLEG): container finished" podID="e7a4c9f4-b898-43b4-812d-ab4f17c2124d" containerID="0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4" exitCode=0 Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.611971 4965 generic.go:334] "Generic (PLEG): container finished" podID="e7a4c9f4-b898-43b4-812d-ab4f17c2124d" containerID="67d21e069f590b62a698ca97a479f7c757d71c1567f4f456cd32f54e49f48f83" exitCode=0 Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.611982 4965 generic.go:334] "Generic (PLEG): container finished" podID="e7a4c9f4-b898-43b4-812d-ab4f17c2124d" containerID="aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282" exitCode=0 Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.612009 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e7a4c9f4-b898-43b4-812d-ab4f17c2124d","Type":"ContainerDied","Data":"0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4"} Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.612041 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e7a4c9f4-b898-43b4-812d-ab4f17c2124d","Type":"ContainerDied","Data":"67d21e069f590b62a698ca97a479f7c757d71c1567f4f456cd32f54e49f48f83"} Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.612055 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"e7a4c9f4-b898-43b4-812d-ab4f17c2124d","Type":"ContainerDied","Data":"aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282"} Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.612068 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e7a4c9f4-b898-43b4-812d-ab4f17c2124d","Type":"ContainerDied","Data":"54b60bf8c3e09744a63228ccd8c5feb0566fcd2b2b377e8bc67a3ba6834884d6"} Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.612093 4965 scope.go:117] "RemoveContainer" containerID="0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.612370 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.632302 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-mwlb6-config-x7cx5" podStartSLOduration=2.632286416 podStartE2EDuration="2.632286416s" podCreationTimestamp="2026-02-19 10:02:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:24.627880919 +0000 UTC m=+1200.249202229" watchObservedRunningTime="2026-02-19 10:02:24.632286416 +0000 UTC m=+1200.253607726" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.639115 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-config-out" (OuterVolumeSpecName: "config-out") pod "e7a4c9f4-b898-43b4-812d-ab4f17c2124d" (UID: "e7a4c9f4-b898-43b4-812d-ab4f17c2124d"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.669433 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "e7a4c9f4-b898-43b4-812d-ab4f17c2124d" (UID: "e7a4c9f4-b898-43b4-812d-ab4f17c2124d"). InnerVolumeSpecName "pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.684422 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-web-config" (OuterVolumeSpecName: "web-config") pod "e7a4c9f4-b898-43b4-812d-ab4f17c2124d" (UID: "e7a4c9f4-b898-43b4-812d-ab4f17c2124d"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.700108 4965 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.700132 4965 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.700154 4965 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3\") on node \"crc\" " Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.700166 4965 reconciler_common.go:293] "Volume detached for volume 
\"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.700178 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.700187 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9ks6\" (UniqueName: \"kubernetes.io/projected/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-kube-api-access-m9ks6\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.700211 4965 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.700219 4965 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-config-out\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.700229 4965 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.700239 4965 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e7a4c9f4-b898-43b4-812d-ab4f17c2124d-web-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.723628 4965 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.724015 4965 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3") on node "crc" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.802109 4965 reconciler_common.go:293] "Volume detached for volume \"pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.824467 4965 scope.go:117] "RemoveContainer" containerID="67d21e069f590b62a698ca97a479f7c757d71c1567f4f456cd32f54e49f48f83" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.843070 4965 scope.go:117] "RemoveContainer" containerID="aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.864126 4965 scope.go:117] "RemoveContainer" containerID="fcb4fa855945951961264a47f8ad49bbe1837e56000c48259f5d69179ff8e515" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.882386 4965 scope.go:117] "RemoveContainer" containerID="0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4" Feb 19 10:02:24 crc kubenswrapper[4965]: E0219 10:02:24.882931 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4\": container with ID starting with 0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4 not found: ID does not exist" containerID="0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.882975 4965 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4"} err="failed to get container status \"0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4\": rpc error: code = NotFound desc = could not find container \"0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4\": container with ID starting with 0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4 not found: ID does not exist" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.883000 4965 scope.go:117] "RemoveContainer" containerID="67d21e069f590b62a698ca97a479f7c757d71c1567f4f456cd32f54e49f48f83" Feb 19 10:02:24 crc kubenswrapper[4965]: E0219 10:02:24.883521 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67d21e069f590b62a698ca97a479f7c757d71c1567f4f456cd32f54e49f48f83\": container with ID starting with 67d21e069f590b62a698ca97a479f7c757d71c1567f4f456cd32f54e49f48f83 not found: ID does not exist" containerID="67d21e069f590b62a698ca97a479f7c757d71c1567f4f456cd32f54e49f48f83" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.883558 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67d21e069f590b62a698ca97a479f7c757d71c1567f4f456cd32f54e49f48f83"} err="failed to get container status \"67d21e069f590b62a698ca97a479f7c757d71c1567f4f456cd32f54e49f48f83\": rpc error: code = NotFound desc = could not find container \"67d21e069f590b62a698ca97a479f7c757d71c1567f4f456cd32f54e49f48f83\": container with ID starting with 67d21e069f590b62a698ca97a479f7c757d71c1567f4f456cd32f54e49f48f83 not found: ID does not exist" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.883580 4965 scope.go:117] "RemoveContainer" containerID="aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282" Feb 19 10:02:24 crc kubenswrapper[4965]: E0219 10:02:24.883906 4965 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282\": container with ID starting with aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282 not found: ID does not exist" containerID="aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.883955 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282"} err="failed to get container status \"aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282\": rpc error: code = NotFound desc = could not find container \"aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282\": container with ID starting with aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282 not found: ID does not exist" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.883993 4965 scope.go:117] "RemoveContainer" containerID="fcb4fa855945951961264a47f8ad49bbe1837e56000c48259f5d69179ff8e515" Feb 19 10:02:24 crc kubenswrapper[4965]: E0219 10:02:24.884388 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcb4fa855945951961264a47f8ad49bbe1837e56000c48259f5d69179ff8e515\": container with ID starting with fcb4fa855945951961264a47f8ad49bbe1837e56000c48259f5d69179ff8e515 not found: ID does not exist" containerID="fcb4fa855945951961264a47f8ad49bbe1837e56000c48259f5d69179ff8e515" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.884411 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcb4fa855945951961264a47f8ad49bbe1837e56000c48259f5d69179ff8e515"} err="failed to get container status \"fcb4fa855945951961264a47f8ad49bbe1837e56000c48259f5d69179ff8e515\": rpc error: code = NotFound desc = could not find container 
\"fcb4fa855945951961264a47f8ad49bbe1837e56000c48259f5d69179ff8e515\": container with ID starting with fcb4fa855945951961264a47f8ad49bbe1837e56000c48259f5d69179ff8e515 not found: ID does not exist" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.884425 4965 scope.go:117] "RemoveContainer" containerID="0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.884716 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4"} err="failed to get container status \"0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4\": rpc error: code = NotFound desc = could not find container \"0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4\": container with ID starting with 0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4 not found: ID does not exist" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.884747 4965 scope.go:117] "RemoveContainer" containerID="67d21e069f590b62a698ca97a479f7c757d71c1567f4f456cd32f54e49f48f83" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.885090 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67d21e069f590b62a698ca97a479f7c757d71c1567f4f456cd32f54e49f48f83"} err="failed to get container status \"67d21e069f590b62a698ca97a479f7c757d71c1567f4f456cd32f54e49f48f83\": rpc error: code = NotFound desc = could not find container \"67d21e069f590b62a698ca97a479f7c757d71c1567f4f456cd32f54e49f48f83\": container with ID starting with 67d21e069f590b62a698ca97a479f7c757d71c1567f4f456cd32f54e49f48f83 not found: ID does not exist" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.885121 4965 scope.go:117] "RemoveContainer" containerID="aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.885411 4965 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282"} err="failed to get container status \"aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282\": rpc error: code = NotFound desc = could not find container \"aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282\": container with ID starting with aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282 not found: ID does not exist" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.885430 4965 scope.go:117] "RemoveContainer" containerID="fcb4fa855945951961264a47f8ad49bbe1837e56000c48259f5d69179ff8e515" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.885635 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcb4fa855945951961264a47f8ad49bbe1837e56000c48259f5d69179ff8e515"} err="failed to get container status \"fcb4fa855945951961264a47f8ad49bbe1837e56000c48259f5d69179ff8e515\": rpc error: code = NotFound desc = could not find container \"fcb4fa855945951961264a47f8ad49bbe1837e56000c48259f5d69179ff8e515\": container with ID starting with fcb4fa855945951961264a47f8ad49bbe1837e56000c48259f5d69179ff8e515 not found: ID does not exist" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.885671 4965 scope.go:117] "RemoveContainer" containerID="0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.885931 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4"} err="failed to get container status \"0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4\": rpc error: code = NotFound desc = could not find container \"0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4\": container with ID starting with 
0bb62c2d1a56a0ddc8f5f5cf45fd1135591f3bf224388ba4dcd9e2cf75d4a1a4 not found: ID does not exist" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.885960 4965 scope.go:117] "RemoveContainer" containerID="67d21e069f590b62a698ca97a479f7c757d71c1567f4f456cd32f54e49f48f83" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.886307 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67d21e069f590b62a698ca97a479f7c757d71c1567f4f456cd32f54e49f48f83"} err="failed to get container status \"67d21e069f590b62a698ca97a479f7c757d71c1567f4f456cd32f54e49f48f83\": rpc error: code = NotFound desc = could not find container \"67d21e069f590b62a698ca97a479f7c757d71c1567f4f456cd32f54e49f48f83\": container with ID starting with 67d21e069f590b62a698ca97a479f7c757d71c1567f4f456cd32f54e49f48f83 not found: ID does not exist" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.886325 4965 scope.go:117] "RemoveContainer" containerID="aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.886554 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282"} err="failed to get container status \"aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282\": rpc error: code = NotFound desc = could not find container \"aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282\": container with ID starting with aa93fa5b35f82a12b5464edbef0d2c83247a37aa0f0773f3ba6e0779f26c0282 not found: ID does not exist" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.886566 4965 scope.go:117] "RemoveContainer" containerID="fcb4fa855945951961264a47f8ad49bbe1837e56000c48259f5d69179ff8e515" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.887076 4965 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fcb4fa855945951961264a47f8ad49bbe1837e56000c48259f5d69179ff8e515"} err="failed to get container status \"fcb4fa855945951961264a47f8ad49bbe1837e56000c48259f5d69179ff8e515\": rpc error: code = NotFound desc = could not find container \"fcb4fa855945951961264a47f8ad49bbe1837e56000c48259f5d69179ff8e515\": container with ID starting with fcb4fa855945951961264a47f8ad49bbe1837e56000c48259f5d69179ff8e515 not found: ID does not exist" Feb 19 10:02:24 crc kubenswrapper[4965]: I0219 10:02:24.982782 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.002280 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.016404 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:02:25 crc kubenswrapper[4965]: E0219 10:02:25.017135 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a4c9f4-b898-43b4-812d-ab4f17c2124d" containerName="thanos-sidecar" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.017158 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a4c9f4-b898-43b4-812d-ab4f17c2124d" containerName="thanos-sidecar" Feb 19 10:02:25 crc kubenswrapper[4965]: E0219 10:02:25.017222 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd8d85c-9a7d-4f54-a589-330a68d04f51" containerName="mariadb-account-create-update" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.017232 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd8d85c-9a7d-4f54-a589-330a68d04f51" containerName="mariadb-account-create-update" Feb 19 10:02:25 crc kubenswrapper[4965]: E0219 10:02:25.017245 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a4c9f4-b898-43b4-812d-ab4f17c2124d" containerName="prometheus" Feb 19 10:02:25 crc 
kubenswrapper[4965]: I0219 10:02:25.017279 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a4c9f4-b898-43b4-812d-ab4f17c2124d" containerName="prometheus" Feb 19 10:02:25 crc kubenswrapper[4965]: E0219 10:02:25.017292 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a4c9f4-b898-43b4-812d-ab4f17c2124d" containerName="config-reloader" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.017299 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a4c9f4-b898-43b4-812d-ab4f17c2124d" containerName="config-reloader" Feb 19 10:02:25 crc kubenswrapper[4965]: E0219 10:02:25.017328 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a4c9f4-b898-43b4-812d-ab4f17c2124d" containerName="init-config-reloader" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.017364 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a4c9f4-b898-43b4-812d-ab4f17c2124d" containerName="init-config-reloader" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.017659 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7a4c9f4-b898-43b4-812d-ab4f17c2124d" containerName="thanos-sidecar" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.017698 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7a4c9f4-b898-43b4-812d-ab4f17c2124d" containerName="prometheus" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.017716 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd8d85c-9a7d-4f54-a589-330a68d04f51" containerName="mariadb-account-create-update" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.017745 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7a4c9f4-b898-43b4-812d-ab4f17c2124d" containerName="config-reloader" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.019837 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.024637 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-rnpks" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.024843 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.024938 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.025039 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.025134 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.025279 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.025492 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.026358 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.027539 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.032635 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.106155 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/efd4d0e2-bc4c-4bac-9236-37338445f7c7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.106449 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/efd4d0e2-bc4c-4bac-9236-37338445f7c7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.106542 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/efd4d0e2-bc4c-4bac-9236-37338445f7c7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.106627 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.106733 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/efd4d0e2-bc4c-4bac-9236-37338445f7c7-config\") pod \"prometheus-metric-storage-0\" 
(UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.106828 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/efd4d0e2-bc4c-4bac-9236-37338445f7c7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.106906 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/efd4d0e2-bc4c-4bac-9236-37338445f7c7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.106986 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/efd4d0e2-bc4c-4bac-9236-37338445f7c7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.107061 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/efd4d0e2-bc4c-4bac-9236-37338445f7c7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 
10:02:25.107132 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/efd4d0e2-bc4c-4bac-9236-37338445f7c7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.107222 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d55q\" (UniqueName: \"kubernetes.io/projected/efd4d0e2-bc4c-4bac-9236-37338445f7c7-kube-api-access-7d55q\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.107297 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd4d0e2-bc4c-4bac-9236-37338445f7c7-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.107394 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/efd4d0e2-bc4c-4bac-9236-37338445f7c7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.241400 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/efd4d0e2-bc4c-4bac-9236-37338445f7c7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " 
pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.241688 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/efd4d0e2-bc4c-4bac-9236-37338445f7c7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.241733 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/efd4d0e2-bc4c-4bac-9236-37338445f7c7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.241766 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/efd4d0e2-bc4c-4bac-9236-37338445f7c7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.241804 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d55q\" (UniqueName: \"kubernetes.io/projected/efd4d0e2-bc4c-4bac-9236-37338445f7c7-kube-api-access-7d55q\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.241844 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd4d0e2-bc4c-4bac-9236-37338445f7c7-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: 
\"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.242022 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/efd4d0e2-bc4c-4bac-9236-37338445f7c7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.242112 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/efd4d0e2-bc4c-4bac-9236-37338445f7c7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.242183 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/efd4d0e2-bc4c-4bac-9236-37338445f7c7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.242236 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.242568 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/efd4d0e2-bc4c-4bac-9236-37338445f7c7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.242633 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/efd4d0e2-bc4c-4bac-9236-37338445f7c7-config\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.242701 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/efd4d0e2-bc4c-4bac-9236-37338445f7c7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.254735 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.255742 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.265190 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.266106 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.279252 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 19 10:02:25 crc kubenswrapper[4965]: 
I0219 10:02:25.279499 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/efd4d0e2-bc4c-4bac-9236-37338445f7c7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.280252 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/efd4d0e2-bc4c-4bac-9236-37338445f7c7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.281004 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.282334 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd4d0e2-bc4c-4bac-9236-37338445f7c7-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.284886 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.289398 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/efd4d0e2-bc4c-4bac-9236-37338445f7c7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " 
pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.291598 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/efd4d0e2-bc4c-4bac-9236-37338445f7c7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.293298 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/efd4d0e2-bc4c-4bac-9236-37338445f7c7-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.294877 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/efd4d0e2-bc4c-4bac-9236-37338445f7c7-config\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.296110 4965 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.296184 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/53b03fef949ae6dbd52b5860402ecf7cae38e33da69a528411b5e62a8cf74f89/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.297495 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.302279 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/efd4d0e2-bc4c-4bac-9236-37338445f7c7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.310210 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d55q\" (UniqueName: \"kubernetes.io/projected/efd4d0e2-bc4c-4bac-9236-37338445f7c7-kube-api-access-7d55q\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.312783 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/efd4d0e2-bc4c-4bac-9236-37338445f7c7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc 
kubenswrapper[4965]: I0219 10:02:25.321158 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/efd4d0e2-bc4c-4bac-9236-37338445f7c7-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.321791 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/efd4d0e2-bc4c-4bac-9236-37338445f7c7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.322407 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7a4c9f4-b898-43b4-812d-ab4f17c2124d" path="/var/lib/kubelet/pods/e7a4c9f4-b898-43b4-812d-ab4f17c2124d/volumes" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.371280 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd8d8a7f-775c-4ca9-8d07-5662d94d0fe3\") pod \"prometheus-metric-storage-0\" (UID: \"efd4d0e2-bc4c-4bac-9236-37338445f7c7\") " pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.640186 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53","Type":"ContainerStarted","Data":"d1d25fbccbbaaa7355a15dad3b63074fbd1e69dc940624f1454d7fe0ded53622"} Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.640622 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53","Type":"ContainerStarted","Data":"0d2bd982cff895f8ec4b71d1db4de2047cc890ef3d90dc30c5145121759797ef"} Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.640635 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53","Type":"ContainerStarted","Data":"5bbaa664055be21bb20023f9908fecac66d65551eb40c05b8e2a4f703c776172"} Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.640643 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53","Type":"ContainerStarted","Data":"ef851b65623d9b19a37aa30b2e87955fa5688ce9b146ef97f7ac335ebf39e5b7"} Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.643767 4965 generic.go:334] "Generic (PLEG): container finished" podID="58d6fed8-aa40-446d-897c-03103683edd7" containerID="608a21ed7bc143cd03932578a5956c3b61eb39b55c671064ae53cc1bffed4b79" exitCode=0 Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.643864 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwlb6-config-x7cx5" event={"ID":"58d6fed8-aa40-446d-897c-03103683edd7","Type":"ContainerDied","Data":"608a21ed7bc143cd03932578a5956c3b61eb39b55c671064ae53cc1bffed4b79"} Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.657774 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-rnpks" Feb 19 10:02:25 crc kubenswrapper[4965]: I0219 10:02:25.665569 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:31 crc kubenswrapper[4965]: I0219 10:02:31.246470 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="faab82f2-bc31-438d-b329-9a31d6ba5040" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 19 10:02:32 crc kubenswrapper[4965]: I0219 10:02:32.539074 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-mwlb6" Feb 19 10:02:33 crc kubenswrapper[4965]: I0219 10:02:33.080522 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 19 10:02:33 crc kubenswrapper[4965]: I0219 10:02:33.435442 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 10:02:33 crc kubenswrapper[4965]: I0219 10:02:33.778468 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:02:33 crc kubenswrapper[4965]: I0219 10:02:33.902784 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-9s824"] Feb 19 10:02:33 crc kubenswrapper[4965]: I0219 10:02:33.904525 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-9s824" Feb 19 10:02:33 crc kubenswrapper[4965]: I0219 10:02:33.942579 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-9s824"] Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.030443 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9f0aaf4-29c1-4187-a237-39502b74bbe9-operator-scripts\") pod \"cloudkitty-db-create-9s824\" (UID: \"a9f0aaf4-29c1-4187-a237-39502b74bbe9\") " pod="openstack/cloudkitty-db-create-9s824" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.030500 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p4hq\" (UniqueName: \"kubernetes.io/projected/a9f0aaf4-29c1-4187-a237-39502b74bbe9-kube-api-access-4p4hq\") pod \"cloudkitty-db-create-9s824\" (UID: \"a9f0aaf4-29c1-4187-a237-39502b74bbe9\") " pod="openstack/cloudkitty-db-create-9s824" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.041918 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mwlb6-config-x7cx5" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.050779 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-zhgqf"] Feb 19 10:02:34 crc kubenswrapper[4965]: E0219 10:02:34.055760 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d6fed8-aa40-446d-897c-03103683edd7" containerName="ovn-config" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.055951 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d6fed8-aa40-446d-897c-03103683edd7" containerName="ovn-config" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.056236 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="58d6fed8-aa40-446d-897c-03103683edd7" containerName="ovn-config" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.084484 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zhgqf" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.094281 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zhgqf"] Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.132016 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/58d6fed8-aa40-446d-897c-03103683edd7-var-run-ovn\") pod \"58d6fed8-aa40-446d-897c-03103683edd7\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.132166 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58d6fed8-aa40-446d-897c-03103683edd7-scripts\") pod \"58d6fed8-aa40-446d-897c-03103683edd7\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.132262 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/58d6fed8-aa40-446d-897c-03103683edd7-var-log-ovn\") pod \"58d6fed8-aa40-446d-897c-03103683edd7\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.132294 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt6rr\" (UniqueName: \"kubernetes.io/projected/58d6fed8-aa40-446d-897c-03103683edd7-kube-api-access-zt6rr\") pod \"58d6fed8-aa40-446d-897c-03103683edd7\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.132383 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/58d6fed8-aa40-446d-897c-03103683edd7-var-run\") pod \"58d6fed8-aa40-446d-897c-03103683edd7\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.132437 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/58d6fed8-aa40-446d-897c-03103683edd7-additional-scripts\") pod \"58d6fed8-aa40-446d-897c-03103683edd7\" (UID: \"58d6fed8-aa40-446d-897c-03103683edd7\") " Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.133102 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8jcp\" (UniqueName: \"kubernetes.io/projected/57994f21-19c8-4e09-b972-de9d0f398410-kube-api-access-c8jcp\") pod \"cinder-db-create-zhgqf\" (UID: \"57994f21-19c8-4e09-b972-de9d0f398410\") " pod="openstack/cinder-db-create-zhgqf" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.133242 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9f0aaf4-29c1-4187-a237-39502b74bbe9-operator-scripts\") pod \"cloudkitty-db-create-9s824\" (UID: 
\"a9f0aaf4-29c1-4187-a237-39502b74bbe9\") " pod="openstack/cloudkitty-db-create-9s824" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.133290 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p4hq\" (UniqueName: \"kubernetes.io/projected/a9f0aaf4-29c1-4187-a237-39502b74bbe9-kube-api-access-4p4hq\") pod \"cloudkitty-db-create-9s824\" (UID: \"a9f0aaf4-29c1-4187-a237-39502b74bbe9\") " pod="openstack/cloudkitty-db-create-9s824" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.133318 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57994f21-19c8-4e09-b972-de9d0f398410-operator-scripts\") pod \"cinder-db-create-zhgqf\" (UID: \"57994f21-19c8-4e09-b972-de9d0f398410\") " pod="openstack/cinder-db-create-zhgqf" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.133566 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58d6fed8-aa40-446d-897c-03103683edd7-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "58d6fed8-aa40-446d-897c-03103683edd7" (UID: "58d6fed8-aa40-446d-897c-03103683edd7"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.139657 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58d6fed8-aa40-446d-897c-03103683edd7-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "58d6fed8-aa40-446d-897c-03103683edd7" (UID: "58d6fed8-aa40-446d-897c-03103683edd7"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.144632 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58d6fed8-aa40-446d-897c-03103683edd7-scripts" (OuterVolumeSpecName: "scripts") pod "58d6fed8-aa40-446d-897c-03103683edd7" (UID: "58d6fed8-aa40-446d-897c-03103683edd7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.145189 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58d6fed8-aa40-446d-897c-03103683edd7-var-run" (OuterVolumeSpecName: "var-run") pod "58d6fed8-aa40-446d-897c-03103683edd7" (UID: "58d6fed8-aa40-446d-897c-03103683edd7"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.146071 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58d6fed8-aa40-446d-897c-03103683edd7-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "58d6fed8-aa40-446d-897c-03103683edd7" (UID: "58d6fed8-aa40-446d-897c-03103683edd7"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.146810 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9f0aaf4-29c1-4187-a237-39502b74bbe9-operator-scripts\") pod \"cloudkitty-db-create-9s824\" (UID: \"a9f0aaf4-29c1-4187-a237-39502b74bbe9\") " pod="openstack/cloudkitty-db-create-9s824" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.169336 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58d6fed8-aa40-446d-897c-03103683edd7-kube-api-access-zt6rr" (OuterVolumeSpecName: "kube-api-access-zt6rr") pod "58d6fed8-aa40-446d-897c-03103683edd7" (UID: "58d6fed8-aa40-446d-897c-03103683edd7"). InnerVolumeSpecName "kube-api-access-zt6rr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.218100 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-7880-account-create-update-fxqfz"] Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.219534 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p4hq\" (UniqueName: \"kubernetes.io/projected/a9f0aaf4-29c1-4187-a237-39502b74bbe9-kube-api-access-4p4hq\") pod \"cloudkitty-db-create-9s824\" (UID: \"a9f0aaf4-29c1-4187-a237-39502b74bbe9\") " pod="openstack/cloudkitty-db-create-9s824" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.221395 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-7880-account-create-update-fxqfz" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.228756 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.263866 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-7880-account-create-update-fxqfz"] Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.264865 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8jcp\" (UniqueName: \"kubernetes.io/projected/57994f21-19c8-4e09-b972-de9d0f398410-kube-api-access-c8jcp\") pod \"cinder-db-create-zhgqf\" (UID: \"57994f21-19c8-4e09-b972-de9d0f398410\") " pod="openstack/cinder-db-create-zhgqf" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.265121 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57994f21-19c8-4e09-b972-de9d0f398410-operator-scripts\") pod \"cinder-db-create-zhgqf\" (UID: \"57994f21-19c8-4e09-b972-de9d0f398410\") " pod="openstack/cinder-db-create-zhgqf" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.265216 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58d6fed8-aa40-446d-897c-03103683edd7-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.265240 4965 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/58d6fed8-aa40-446d-897c-03103683edd7-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.265253 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt6rr\" (UniqueName: \"kubernetes.io/projected/58d6fed8-aa40-446d-897c-03103683edd7-kube-api-access-zt6rr\") on node \"crc\" DevicePath \"\"" 
Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.265266 4965 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/58d6fed8-aa40-446d-897c-03103683edd7-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.265276 4965 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/58d6fed8-aa40-446d-897c-03103683edd7-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.265286 4965 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/58d6fed8-aa40-446d-897c-03103683edd7-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.266292 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57994f21-19c8-4e09-b972-de9d0f398410-operator-scripts\") pod \"cinder-db-create-zhgqf\" (UID: \"57994f21-19c8-4e09-b972-de9d0f398410\") " pod="openstack/cinder-db-create-zhgqf" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.286749 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c4ed-account-create-update-w5v9f"] Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.288671 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c4ed-account-create-update-w5v9f" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.297060 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.304750 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-r86pd"] Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.306208 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-r86pd" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.309783 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8jcp\" (UniqueName: \"kubernetes.io/projected/57994f21-19c8-4e09-b972-de9d0f398410-kube-api-access-c8jcp\") pod \"cinder-db-create-zhgqf\" (UID: \"57994f21-19c8-4e09-b972-de9d0f398410\") " pod="openstack/cinder-db-create-zhgqf" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.312998 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9ln6f" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.313378 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.313698 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.313832 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.334639 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c4ed-account-create-update-w5v9f"] Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.361764 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-r86pd"] Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.366672 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3111b00-ad13-4a92-97ca-95a778007dc2-operator-scripts\") pod \"cloudkitty-7880-account-create-update-fxqfz\" (UID: \"e3111b00-ad13-4a92-97ca-95a778007dc2\") " pod="openstack/cloudkitty-7880-account-create-update-fxqfz" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.366711 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5eb6197e-e339-43bd-861a-faff9e8f4f65-operator-scripts\") pod \"cinder-c4ed-account-create-update-w5v9f\" (UID: \"5eb6197e-e339-43bd-861a-faff9e8f4f65\") " pod="openstack/cinder-c4ed-account-create-update-w5v9f" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.366753 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e834046-19e3-47b1-b822-6c73b0d8be74-combined-ca-bundle\") pod \"keystone-db-sync-r86pd\" (UID: \"5e834046-19e3-47b1-b822-6c73b0d8be74\") " pod="openstack/keystone-db-sync-r86pd" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.366794 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swngw\" (UniqueName: \"kubernetes.io/projected/5eb6197e-e339-43bd-861a-faff9e8f4f65-kube-api-access-swngw\") pod \"cinder-c4ed-account-create-update-w5v9f\" (UID: \"5eb6197e-e339-43bd-861a-faff9e8f4f65\") " pod="openstack/cinder-c4ed-account-create-update-w5v9f" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.366859 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e834046-19e3-47b1-b822-6c73b0d8be74-config-data\") pod \"keystone-db-sync-r86pd\" (UID: \"5e834046-19e3-47b1-b822-6c73b0d8be74\") " pod="openstack/keystone-db-sync-r86pd" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.366890 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c7tk\" (UniqueName: \"kubernetes.io/projected/e3111b00-ad13-4a92-97ca-95a778007dc2-kube-api-access-5c7tk\") pod \"cloudkitty-7880-account-create-update-fxqfz\" (UID: \"e3111b00-ad13-4a92-97ca-95a778007dc2\") " 
pod="openstack/cloudkitty-7880-account-create-update-fxqfz" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.366936 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkz96\" (UniqueName: \"kubernetes.io/projected/5e834046-19e3-47b1-b822-6c73b0d8be74-kube-api-access-zkz96\") pod \"keystone-db-sync-r86pd\" (UID: \"5e834046-19e3-47b1-b822-6c73b0d8be74\") " pod="openstack/keystone-db-sync-r86pd" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.441163 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-zwjcj"] Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.443257 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zwjcj" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.452837 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-zwjcj"] Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.455781 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-9s824" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.470858 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e834046-19e3-47b1-b822-6c73b0d8be74-config-data\") pod \"keystone-db-sync-r86pd\" (UID: \"5e834046-19e3-47b1-b822-6c73b0d8be74\") " pod="openstack/keystone-db-sync-r86pd" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.470938 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c7tk\" (UniqueName: \"kubernetes.io/projected/e3111b00-ad13-4a92-97ca-95a778007dc2-kube-api-access-5c7tk\") pod \"cloudkitty-7880-account-create-update-fxqfz\" (UID: \"e3111b00-ad13-4a92-97ca-95a778007dc2\") " pod="openstack/cloudkitty-7880-account-create-update-fxqfz" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.471014 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkz96\" (UniqueName: \"kubernetes.io/projected/5e834046-19e3-47b1-b822-6c73b0d8be74-kube-api-access-zkz96\") pod \"keystone-db-sync-r86pd\" (UID: \"5e834046-19e3-47b1-b822-6c73b0d8be74\") " pod="openstack/keystone-db-sync-r86pd" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.471086 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3111b00-ad13-4a92-97ca-95a778007dc2-operator-scripts\") pod \"cloudkitty-7880-account-create-update-fxqfz\" (UID: \"e3111b00-ad13-4a92-97ca-95a778007dc2\") " pod="openstack/cloudkitty-7880-account-create-update-fxqfz" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.471125 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5eb6197e-e339-43bd-861a-faff9e8f4f65-operator-scripts\") pod \"cinder-c4ed-account-create-update-w5v9f\" (UID: 
\"5eb6197e-e339-43bd-861a-faff9e8f4f65\") " pod="openstack/cinder-c4ed-account-create-update-w5v9f" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.471178 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e834046-19e3-47b1-b822-6c73b0d8be74-combined-ca-bundle\") pod \"keystone-db-sync-r86pd\" (UID: \"5e834046-19e3-47b1-b822-6c73b0d8be74\") " pod="openstack/keystone-db-sync-r86pd" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.471250 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swngw\" (UniqueName: \"kubernetes.io/projected/5eb6197e-e339-43bd-861a-faff9e8f4f65-kube-api-access-swngw\") pod \"cinder-c4ed-account-create-update-w5v9f\" (UID: \"5eb6197e-e339-43bd-861a-faff9e8f4f65\") " pod="openstack/cinder-c4ed-account-create-update-w5v9f" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.475127 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-6865-account-create-update-ql48d"] Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.476607 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-6865-account-create-update-ql48d" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.480896 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3111b00-ad13-4a92-97ca-95a778007dc2-operator-scripts\") pod \"cloudkitty-7880-account-create-update-fxqfz\" (UID: \"e3111b00-ad13-4a92-97ca-95a778007dc2\") " pod="openstack/cloudkitty-7880-account-create-update-fxqfz" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.481324 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5eb6197e-e339-43bd-861a-faff9e8f4f65-operator-scripts\") pod \"cinder-c4ed-account-create-update-w5v9f\" (UID: \"5eb6197e-e339-43bd-861a-faff9e8f4f65\") " pod="openstack/cinder-c4ed-account-create-update-w5v9f" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.482029 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-zhgqf" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.492658 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e834046-19e3-47b1-b822-6c73b0d8be74-combined-ca-bundle\") pod \"keystone-db-sync-r86pd\" (UID: \"5e834046-19e3-47b1-b822-6c73b0d8be74\") " pod="openstack/keystone-db-sync-r86pd" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.493710 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.494847 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e834046-19e3-47b1-b822-6c73b0d8be74-config-data\") pod \"keystone-db-sync-r86pd\" (UID: \"5e834046-19e3-47b1-b822-6c73b0d8be74\") " pod="openstack/keystone-db-sync-r86pd" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.497392 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swngw\" (UniqueName: \"kubernetes.io/projected/5eb6197e-e339-43bd-861a-faff9e8f4f65-kube-api-access-swngw\") pod \"cinder-c4ed-account-create-update-w5v9f\" (UID: \"5eb6197e-e339-43bd-861a-faff9e8f4f65\") " pod="openstack/cinder-c4ed-account-create-update-w5v9f" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.511129 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6865-account-create-update-ql48d"] Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.519950 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkz96\" (UniqueName: \"kubernetes.io/projected/5e834046-19e3-47b1-b822-6c73b0d8be74-kube-api-access-zkz96\") pod \"keystone-db-sync-r86pd\" (UID: \"5e834046-19e3-47b1-b822-6c73b0d8be74\") " pod="openstack/keystone-db-sync-r86pd" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 
10:02:34.522350 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c7tk\" (UniqueName: \"kubernetes.io/projected/e3111b00-ad13-4a92-97ca-95a778007dc2-kube-api-access-5c7tk\") pod \"cloudkitty-7880-account-create-update-fxqfz\" (UID: \"e3111b00-ad13-4a92-97ca-95a778007dc2\") " pod="openstack/cloudkitty-7880-account-create-update-fxqfz" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.543799 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f339-account-create-update-vw9jg"] Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.548294 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f339-account-create-update-vw9jg" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.550371 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.575032 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzn77\" (UniqueName: \"kubernetes.io/projected/e59eb68c-26e2-4951-900e-5a7b59197d54-kube-api-access-tzn77\") pod \"neutron-db-create-zwjcj\" (UID: \"e59eb68c-26e2-4951-900e-5a7b59197d54\") " pod="openstack/neutron-db-create-zwjcj" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.575115 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/547c989f-c71d-4a1b-9031-61fd03d9c2f1-operator-scripts\") pod \"barbican-6865-account-create-update-ql48d\" (UID: \"547c989f-c71d-4a1b-9031-61fd03d9c2f1\") " pod="openstack/barbican-6865-account-create-update-ql48d" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.577763 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e59eb68c-26e2-4951-900e-5a7b59197d54-operator-scripts\") pod \"neutron-db-create-zwjcj\" (UID: \"e59eb68c-26e2-4951-900e-5a7b59197d54\") " pod="openstack/neutron-db-create-zwjcj" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.577941 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xng8f\" (UniqueName: \"kubernetes.io/projected/547c989f-c71d-4a1b-9031-61fd03d9c2f1-kube-api-access-xng8f\") pod \"barbican-6865-account-create-update-ql48d\" (UID: \"547c989f-c71d-4a1b-9031-61fd03d9c2f1\") " pod="openstack/barbican-6865-account-create-update-ql48d" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.584312 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f339-account-create-update-vw9jg"] Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.599491 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-4dd7f"] Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.603827 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4dd7f" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.612235 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4dd7f"] Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.624721 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-7880-account-create-update-fxqfz" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.703640 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f4s6\" (UniqueName: \"kubernetes.io/projected/f5fa636a-ebf1-4873-a54b-bdf1171f8138-kube-api-access-9f4s6\") pod \"neutron-f339-account-create-update-vw9jg\" (UID: \"f5fa636a-ebf1-4873-a54b-bdf1171f8138\") " pod="openstack/neutron-f339-account-create-update-vw9jg" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.703707 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzn77\" (UniqueName: \"kubernetes.io/projected/e59eb68c-26e2-4951-900e-5a7b59197d54-kube-api-access-tzn77\") pod \"neutron-db-create-zwjcj\" (UID: \"e59eb68c-26e2-4951-900e-5a7b59197d54\") " pod="openstack/neutron-db-create-zwjcj" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.703739 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/547c989f-c71d-4a1b-9031-61fd03d9c2f1-operator-scripts\") pod \"barbican-6865-account-create-update-ql48d\" (UID: \"547c989f-c71d-4a1b-9031-61fd03d9c2f1\") " pod="openstack/barbican-6865-account-create-update-ql48d" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.703799 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e59eb68c-26e2-4951-900e-5a7b59197d54-operator-scripts\") pod \"neutron-db-create-zwjcj\" (UID: \"e59eb68c-26e2-4951-900e-5a7b59197d54\") " pod="openstack/neutron-db-create-zwjcj" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.703850 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f5fa636a-ebf1-4873-a54b-bdf1171f8138-operator-scripts\") pod \"neutron-f339-account-create-update-vw9jg\" (UID: \"f5fa636a-ebf1-4873-a54b-bdf1171f8138\") " pod="openstack/neutron-f339-account-create-update-vw9jg" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.703869 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8wgm\" (UniqueName: \"kubernetes.io/projected/dfd014f4-1151-4fc5-8b0b-cfeb54b3845d-kube-api-access-p8wgm\") pod \"barbican-db-create-4dd7f\" (UID: \"dfd014f4-1151-4fc5-8b0b-cfeb54b3845d\") " pod="openstack/barbican-db-create-4dd7f" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.703890 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfd014f4-1151-4fc5-8b0b-cfeb54b3845d-operator-scripts\") pod \"barbican-db-create-4dd7f\" (UID: \"dfd014f4-1151-4fc5-8b0b-cfeb54b3845d\") " pod="openstack/barbican-db-create-4dd7f" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.703917 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xng8f\" (UniqueName: \"kubernetes.io/projected/547c989f-c71d-4a1b-9031-61fd03d9c2f1-kube-api-access-xng8f\") pod \"barbican-6865-account-create-update-ql48d\" (UID: \"547c989f-c71d-4a1b-9031-61fd03d9c2f1\") " pod="openstack/barbican-6865-account-create-update-ql48d" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.705961 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/547c989f-c71d-4a1b-9031-61fd03d9c2f1-operator-scripts\") pod \"barbican-6865-account-create-update-ql48d\" (UID: \"547c989f-c71d-4a1b-9031-61fd03d9c2f1\") " pod="openstack/barbican-6865-account-create-update-ql48d" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.706611 4965 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e59eb68c-26e2-4951-900e-5a7b59197d54-operator-scripts\") pod \"neutron-db-create-zwjcj\" (UID: \"e59eb68c-26e2-4951-900e-5a7b59197d54\") " pod="openstack/neutron-db-create-zwjcj" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.722294 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c4ed-account-create-update-w5v9f" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.732276 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzn77\" (UniqueName: \"kubernetes.io/projected/e59eb68c-26e2-4951-900e-5a7b59197d54-kube-api-access-tzn77\") pod \"neutron-db-create-zwjcj\" (UID: \"e59eb68c-26e2-4951-900e-5a7b59197d54\") " pod="openstack/neutron-db-create-zwjcj" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.748848 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xng8f\" (UniqueName: \"kubernetes.io/projected/547c989f-c71d-4a1b-9031-61fd03d9c2f1-kube-api-access-xng8f\") pod \"barbican-6865-account-create-update-ql48d\" (UID: \"547c989f-c71d-4a1b-9031-61fd03d9c2f1\") " pod="openstack/barbican-6865-account-create-update-ql48d" Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.757617 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-r86pd"
Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.787786 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53","Type":"ContainerStarted","Data":"6290f054e34c87d0e8046e9a53caecb43b1662de4ba5a9e71c9ef379b7e533b6"}
Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.789740 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mwlb6-config-x7cx5" event={"ID":"58d6fed8-aa40-446d-897c-03103683edd7","Type":"ContainerDied","Data":"1bc7dfb25b98223b30335006a229bdfc56f629fe92f639304eced86eb12d96b5"}
Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.789767 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bc7dfb25b98223b30335006a229bdfc56f629fe92f639304eced86eb12d96b5"
Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.789839 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mwlb6-config-x7cx5"
Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.805163 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5fa636a-ebf1-4873-a54b-bdf1171f8138-operator-scripts\") pod \"neutron-f339-account-create-update-vw9jg\" (UID: \"f5fa636a-ebf1-4873-a54b-bdf1171f8138\") " pod="openstack/neutron-f339-account-create-update-vw9jg"
Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.805937 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8wgm\" (UniqueName: \"kubernetes.io/projected/dfd014f4-1151-4fc5-8b0b-cfeb54b3845d-kube-api-access-p8wgm\") pod \"barbican-db-create-4dd7f\" (UID: \"dfd014f4-1151-4fc5-8b0b-cfeb54b3845d\") " pod="openstack/barbican-db-create-4dd7f"
Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.805977 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfd014f4-1151-4fc5-8b0b-cfeb54b3845d-operator-scripts\") pod \"barbican-db-create-4dd7f\" (UID: \"dfd014f4-1151-4fc5-8b0b-cfeb54b3845d\") " pod="openstack/barbican-db-create-4dd7f"
Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.806056 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f4s6\" (UniqueName: \"kubernetes.io/projected/f5fa636a-ebf1-4873-a54b-bdf1171f8138-kube-api-access-9f4s6\") pod \"neutron-f339-account-create-update-vw9jg\" (UID: \"f5fa636a-ebf1-4873-a54b-bdf1171f8138\") " pod="openstack/neutron-f339-account-create-update-vw9jg"
Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.807531 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5fa636a-ebf1-4873-a54b-bdf1171f8138-operator-scripts\") pod \"neutron-f339-account-create-update-vw9jg\" (UID: \"f5fa636a-ebf1-4873-a54b-bdf1171f8138\") " pod="openstack/neutron-f339-account-create-update-vw9jg"
Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.809745 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfd014f4-1151-4fc5-8b0b-cfeb54b3845d-operator-scripts\") pod \"barbican-db-create-4dd7f\" (UID: \"dfd014f4-1151-4fc5-8b0b-cfeb54b3845d\") " pod="openstack/barbican-db-create-4dd7f"
Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.830251 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8wgm\" (UniqueName: \"kubernetes.io/projected/dfd014f4-1151-4fc5-8b0b-cfeb54b3845d-kube-api-access-p8wgm\") pod \"barbican-db-create-4dd7f\" (UID: \"dfd014f4-1151-4fc5-8b0b-cfeb54b3845d\") " pod="openstack/barbican-db-create-4dd7f"
Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.834999 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f4s6\" (UniqueName: \"kubernetes.io/projected/f5fa636a-ebf1-4873-a54b-bdf1171f8138-kube-api-access-9f4s6\") pod \"neutron-f339-account-create-update-vw9jg\" (UID: \"f5fa636a-ebf1-4873-a54b-bdf1171f8138\") " pod="openstack/neutron-f339-account-create-update-vw9jg"
Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.846870 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zwjcj"
Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.868010 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6865-account-create-update-ql48d"
Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.881211 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f339-account-create-update-vw9jg"
Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.901913 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 19 10:02:34 crc kubenswrapper[4965]: I0219 10:02:34.928993 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4dd7f"
Feb 19 10:02:35 crc kubenswrapper[4965]: I0219 10:02:35.380301 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-mwlb6-config-x7cx5"]
Feb 19 10:02:35 crc kubenswrapper[4965]: I0219 10:02:35.414252 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-mwlb6-config-x7cx5"]
Feb 19 10:02:35 crc kubenswrapper[4965]: I0219 10:02:35.441281 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-9s824"]
Feb 19 10:02:35 crc kubenswrapper[4965]: I0219 10:02:35.708163 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zhgqf"]
Feb 19 10:02:35 crc kubenswrapper[4965]: I0219 10:02:35.741112 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-r86pd"]
Feb 19 10:02:35 crc kubenswrapper[4965]: I0219 10:02:35.821115 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-7880-account-create-update-fxqfz"]
Feb 19 10:02:35 crc kubenswrapper[4965]: I0219 10:02:35.892551 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-9s824" event={"ID":"a9f0aaf4-29c1-4187-a237-39502b74bbe9","Type":"ContainerStarted","Data":"bc1f510260ed5f9e5edc13268f9c1fceb24cee7e6ba66cf62dd489e767ce159b"}
Feb 19 10:02:35 crc kubenswrapper[4965]: I0219 10:02:35.892731 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-9s824" event={"ID":"a9f0aaf4-29c1-4187-a237-39502b74bbe9","Type":"ContainerStarted","Data":"98efac91c86dd008c9a991ebc7b8736702930fe8f6da20e61940174edb8a4612"}
Feb 19 10:02:35 crc kubenswrapper[4965]: I0219 10:02:35.923852 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zhgqf" event={"ID":"57994f21-19c8-4e09-b972-de9d0f398410","Type":"ContainerStarted","Data":"6996880906e2b268204ec841b6a67121855a333007cc0690e9916b28abd11cbb"}
Feb 19 10:02:35 crc kubenswrapper[4965]: I0219 10:02:35.931062 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"efd4d0e2-bc4c-4bac-9236-37338445f7c7","Type":"ContainerStarted","Data":"ffab4c35e0f70a4c61f51aea109e57b8d2e6697d487e85c7447422c03c966656"}
Feb 19 10:02:35 crc kubenswrapper[4965]: I0219 10:02:35.950924 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-create-9s824" podStartSLOduration=2.9509085170000002 podStartE2EDuration="2.950908517s" podCreationTimestamp="2026-02-19 10:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:35.946841478 +0000 UTC m=+1211.568162788" watchObservedRunningTime="2026-02-19 10:02:35.950908517 +0000 UTC m=+1211.572229827"
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.010149 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2c3ae050-b164-4fbc-9e5b-392eb0a4fb53","Type":"ContainerStarted","Data":"192d00b568d4f778cc8eb534b37bd1668546227ced683c5a1b7a6cb32fd40d79"}
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.026639 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r86pd" event={"ID":"5e834046-19e3-47b1-b822-6c73b0d8be74","Type":"ContainerStarted","Data":"27def61dfb97fe20399d44d8924e98e1ee3722850286f15fa73130af95bef85d"}
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.096214 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=31.028391402 podStartE2EDuration="38.096179404s" podCreationTimestamp="2026-02-19 10:01:58 +0000 UTC" firstStartedPulling="2026-02-19 10:02:17.095856135 +0000 UTC m=+1192.717177445" lastFinishedPulling="2026-02-19 10:02:24.163644137 +0000 UTC m=+1199.784965447" observedRunningTime="2026-02-19 10:02:36.095830925 +0000 UTC m=+1211.717152235" watchObservedRunningTime="2026-02-19 10:02:36.096179404 +0000 UTC m=+1211.717500724"
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.165057 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-zwjcj"]
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.178807 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c4ed-account-create-update-w5v9f"]
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.204497 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6865-account-create-update-ql48d"]
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.313777 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f339-account-create-update-vw9jg"]
Feb 19 10:02:36 crc kubenswrapper[4965]: W0219 10:02:36.468420 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5fa636a_ebf1_4873_a54b_bdf1171f8138.slice/crio-ab5ff786bbda166d21a24abe45c92cdb0b8da68ae98b2e54de09ebe961e6c6bc WatchSource:0}: Error finding container ab5ff786bbda166d21a24abe45c92cdb0b8da68ae98b2e54de09ebe961e6c6bc: Status 404 returned error can't find the container with id ab5ff786bbda166d21a24abe45c92cdb0b8da68ae98b2e54de09ebe961e6c6bc
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.510369 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4dd7f"]
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.517387 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-29vfc"]
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.525854 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-29vfc"
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.530781 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.541520 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-29vfc"]
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.621865 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd9nd\" (UniqueName: \"kubernetes.io/projected/de82886a-c712-4ba8-b921-373802d4e7a7-kube-api-access-wd9nd\") pod \"dnsmasq-dns-5c79d794d7-29vfc\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") " pod="openstack/dnsmasq-dns-5c79d794d7-29vfc"
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.621936 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-29vfc\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") " pod="openstack/dnsmasq-dns-5c79d794d7-29vfc"
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.622040 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-29vfc\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") " pod="openstack/dnsmasq-dns-5c79d794d7-29vfc"
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.622100 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-config\") pod \"dnsmasq-dns-5c79d794d7-29vfc\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") " pod="openstack/dnsmasq-dns-5c79d794d7-29vfc"
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.622136 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-29vfc\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") " pod="openstack/dnsmasq-dns-5c79d794d7-29vfc"
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.622237 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-29vfc\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") " pod="openstack/dnsmasq-dns-5c79d794d7-29vfc"
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.723767 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-config\") pod \"dnsmasq-dns-5c79d794d7-29vfc\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") " pod="openstack/dnsmasq-dns-5c79d794d7-29vfc"
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.723814 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-29vfc\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") " pod="openstack/dnsmasq-dns-5c79d794d7-29vfc"
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.723882 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-29vfc\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") " pod="openstack/dnsmasq-dns-5c79d794d7-29vfc"
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.723940 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd9nd\" (UniqueName: \"kubernetes.io/projected/de82886a-c712-4ba8-b921-373802d4e7a7-kube-api-access-wd9nd\") pod \"dnsmasq-dns-5c79d794d7-29vfc\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") " pod="openstack/dnsmasq-dns-5c79d794d7-29vfc"
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.723974 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-29vfc\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") " pod="openstack/dnsmasq-dns-5c79d794d7-29vfc"
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.724012 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-29vfc\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") " pod="openstack/dnsmasq-dns-5c79d794d7-29vfc"
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.725159 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-config\") pod \"dnsmasq-dns-5c79d794d7-29vfc\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") " pod="openstack/dnsmasq-dns-5c79d794d7-29vfc"
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.726271 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-29vfc\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") " pod="openstack/dnsmasq-dns-5c79d794d7-29vfc"
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.726809 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-29vfc\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") " pod="openstack/dnsmasq-dns-5c79d794d7-29vfc"
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.727231 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-29vfc\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") " pod="openstack/dnsmasq-dns-5c79d794d7-29vfc"
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.727234 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-29vfc\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") " pod="openstack/dnsmasq-dns-5c79d794d7-29vfc"
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.752175 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd9nd\" (UniqueName: \"kubernetes.io/projected/de82886a-c712-4ba8-b921-373802d4e7a7-kube-api-access-wd9nd\") pod \"dnsmasq-dns-5c79d794d7-29vfc\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") " pod="openstack/dnsmasq-dns-5c79d794d7-29vfc"
Feb 19 10:02:36 crc kubenswrapper[4965]: I0219 10:02:36.931057 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-29vfc"
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.040274 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-7880-account-create-update-fxqfz" event={"ID":"e3111b00-ad13-4a92-97ca-95a778007dc2","Type":"ContainerStarted","Data":"8140197a941920e015288e8493da3b9f38be477528a2aa04ccfd44714bb7d857"}
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.040320 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-7880-account-create-update-fxqfz" event={"ID":"e3111b00-ad13-4a92-97ca-95a778007dc2","Type":"ContainerStarted","Data":"6268cab13ebdb5190400ccf3c2cd2323563ca06fc75ee2d170ab8440df235822"}
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.053958 4965 generic.go:334] "Generic (PLEG): container finished" podID="a9f0aaf4-29c1-4187-a237-39502b74bbe9" containerID="bc1f510260ed5f9e5edc13268f9c1fceb24cee7e6ba66cf62dd489e767ce159b" exitCode=0
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.054036 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-9s824" event={"ID":"a9f0aaf4-29c1-4187-a237-39502b74bbe9","Type":"ContainerDied","Data":"bc1f510260ed5f9e5edc13268f9c1fceb24cee7e6ba66cf62dd489e767ce159b"}
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.077408 4965 generic.go:334] "Generic (PLEG): container finished" podID="57994f21-19c8-4e09-b972-de9d0f398410" containerID="1852de23369442e85fdeb588ebe22c4d2dbf04d7d4bc0beeadd826819c3f253c" exitCode=0
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.077712 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zhgqf" event={"ID":"57994f21-19c8-4e09-b972-de9d0f398410","Type":"ContainerDied","Data":"1852de23369442e85fdeb588ebe22c4d2dbf04d7d4bc0beeadd826819c3f253c"}
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.080275 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-vkkc7" event={"ID":"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4","Type":"ContainerStarted","Data":"0a713c7e515c2ad86f134cc876a83fc23beb8a521c00339ea120389fa4cc470f"}
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.083484 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f339-account-create-update-vw9jg" event={"ID":"f5fa636a-ebf1-4873-a54b-bdf1171f8138","Type":"ContainerStarted","Data":"e4c39128a06b341fe96b189d872bd76c045429dbb13f013f0d1a36f7273a1afc"}
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.083663 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f339-account-create-update-vw9jg" event={"ID":"f5fa636a-ebf1-4873-a54b-bdf1171f8138","Type":"ContainerStarted","Data":"ab5ff786bbda166d21a24abe45c92cdb0b8da68ae98b2e54de09ebe961e6c6bc"}
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.105939 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6865-account-create-update-ql48d" event={"ID":"547c989f-c71d-4a1b-9031-61fd03d9c2f1","Type":"ContainerStarted","Data":"51cbe1456e4397c88bed1f3860a060a92b32671437b6d75e715fd55941d4693e"}
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.106146 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6865-account-create-update-ql48d" event={"ID":"547c989f-c71d-4a1b-9031-61fd03d9c2f1","Type":"ContainerStarted","Data":"f6a04adb107513a99af40e7a7b9c4cc038e53c5e243595afc3001fec5b30608b"}
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.116419 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c4ed-account-create-update-w5v9f" event={"ID":"5eb6197e-e339-43bd-861a-faff9e8f4f65","Type":"ContainerStarted","Data":"5b688329c2653fc9cf4ab20dd9f742d9cb2861aa97797074f100f283f322803b"}
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.116543 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c4ed-account-create-update-w5v9f" event={"ID":"5eb6197e-e339-43bd-861a-faff9e8f4f65","Type":"ContainerStarted","Data":"1563aad2e395643383e3dd6fe6417c76dcccf74dd86c1dc0c296eca3e17b3d60"}
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.119632 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-7880-account-create-update-fxqfz" podStartSLOduration=3.119621122 podStartE2EDuration="3.119621122s" podCreationTimestamp="2026-02-19 10:02:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:37.068126582 +0000 UTC m=+1212.689447892" watchObservedRunningTime="2026-02-19 10:02:37.119621122 +0000 UTC m=+1212.740942432"
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.137994 4965 generic.go:334] "Generic (PLEG): container finished" podID="e59eb68c-26e2-4951-900e-5a7b59197d54" containerID="146be35c72f957cac9abab6462be4be9050a65418ad879665bfaf2f8085ab115" exitCode=0
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.138271 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zwjcj" event={"ID":"e59eb68c-26e2-4951-900e-5a7b59197d54","Type":"ContainerDied","Data":"146be35c72f957cac9abab6462be4be9050a65418ad879665bfaf2f8085ab115"}
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.138347 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zwjcj" event={"ID":"e59eb68c-26e2-4951-900e-5a7b59197d54","Type":"ContainerStarted","Data":"fc143e2fcb7da688051d0358582624ab35dcd2d428f6c4f9b1f8a716d7bdd436"}
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.161389 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4dd7f" event={"ID":"dfd014f4-1151-4fc5-8b0b-cfeb54b3845d","Type":"ContainerStarted","Data":"5febd91fdf00927af6661c02036c04492a2c683750f9e984e67d385fa02aff6d"}
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.161673 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4dd7f" event={"ID":"dfd014f4-1151-4fc5-8b0b-cfeb54b3845d","Type":"ContainerStarted","Data":"36fdcf4e001ce6ac8ea35255b075972d3c6645552b856e55604a084a12c75f43"}
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.166719 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-vkkc7" podStartSLOduration=7.993730353 podStartE2EDuration="26.166675834s" podCreationTimestamp="2026-02-19 10:02:11 +0000 UTC" firstStartedPulling="2026-02-19 10:02:16.122665256 +0000 UTC m=+1191.743986566" lastFinishedPulling="2026-02-19 10:02:34.295610737 +0000 UTC m=+1209.916932047" observedRunningTime="2026-02-19 10:02:37.125954536 +0000 UTC m=+1212.747275846" watchObservedRunningTime="2026-02-19 10:02:37.166675834 +0000 UTC m=+1212.787997144"
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.223649 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-f339-account-create-update-vw9jg" podStartSLOduration=3.223631238 podStartE2EDuration="3.223631238s" podCreationTimestamp="2026-02-19 10:02:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:37.147026357 +0000 UTC m=+1212.768347667" watchObservedRunningTime="2026-02-19 10:02:37.223631238 +0000 UTC m=+1212.844952548"
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.234262 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58d6fed8-aa40-446d-897c-03103683edd7" path="/var/lib/kubelet/pods/58d6fed8-aa40-446d-897c-03103683edd7/volumes"
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.262282 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-c4ed-account-create-update-w5v9f" podStartSLOduration=3.262263885 podStartE2EDuration="3.262263885s" podCreationTimestamp="2026-02-19 10:02:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:37.202270089 +0000 UTC m=+1212.823591389" watchObservedRunningTime="2026-02-19 10:02:37.262263885 +0000 UTC m=+1212.883585195"
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.267699 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-4dd7f" podStartSLOduration=3.267688657 podStartE2EDuration="3.267688657s" podCreationTimestamp="2026-02-19 10:02:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:37.216880603 +0000 UTC m=+1212.838201913" watchObservedRunningTime="2026-02-19 10:02:37.267688657 +0000 UTC m=+1212.889009957"
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.289210 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-6865-account-create-update-ql48d" podStartSLOduration=3.289173249 podStartE2EDuration="3.289173249s" podCreationTimestamp="2026-02-19 10:02:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:37.243651043 +0000 UTC m=+1212.864972353" watchObservedRunningTime="2026-02-19 10:02:37.289173249 +0000 UTC m=+1212.910494549"
Feb 19 10:02:37 crc kubenswrapper[4965]: I0219 10:02:37.573026 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-29vfc"]
Feb 19 10:02:37 crc kubenswrapper[4965]: W0219 10:02:37.584949 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde82886a_c712_4ba8_b921_373802d4e7a7.slice/crio-af6a24a18b805cc162bdc37ef2ba25d096fb8cec362c29e4bb7eb324f24e436c WatchSource:0}: Error finding container af6a24a18b805cc162bdc37ef2ba25d096fb8cec362c29e4bb7eb324f24e436c: Status 404 returned error can't find the container with id af6a24a18b805cc162bdc37ef2ba25d096fb8cec362c29e4bb7eb324f24e436c
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.177489 4965 generic.go:334] "Generic (PLEG): container finished" podID="e3111b00-ad13-4a92-97ca-95a778007dc2" containerID="8140197a941920e015288e8493da3b9f38be477528a2aa04ccfd44714bb7d857" exitCode=0
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.177574 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-7880-account-create-update-fxqfz" event={"ID":"e3111b00-ad13-4a92-97ca-95a778007dc2","Type":"ContainerDied","Data":"8140197a941920e015288e8493da3b9f38be477528a2aa04ccfd44714bb7d857"}
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.182141 4965 generic.go:334] "Generic (PLEG): container finished" podID="547c989f-c71d-4a1b-9031-61fd03d9c2f1" containerID="51cbe1456e4397c88bed1f3860a060a92b32671437b6d75e715fd55941d4693e" exitCode=0
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.182232 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6865-account-create-update-ql48d" event={"ID":"547c989f-c71d-4a1b-9031-61fd03d9c2f1","Type":"ContainerDied","Data":"51cbe1456e4397c88bed1f3860a060a92b32671437b6d75e715fd55941d4693e"}
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.184978 4965 generic.go:334] "Generic (PLEG): container finished" podID="5eb6197e-e339-43bd-861a-faff9e8f4f65" containerID="5b688329c2653fc9cf4ab20dd9f742d9cb2861aa97797074f100f283f322803b" exitCode=0
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.185082 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c4ed-account-create-update-w5v9f" event={"ID":"5eb6197e-e339-43bd-861a-faff9e8f4f65","Type":"ContainerDied","Data":"5b688329c2653fc9cf4ab20dd9f742d9cb2861aa97797074f100f283f322803b"}
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.187152 4965 generic.go:334] "Generic (PLEG): container finished" podID="de82886a-c712-4ba8-b921-373802d4e7a7" containerID="1eab54754a87344c355be5da22412e7c59fbf788bc9a4849a778b1089137778c" exitCode=0
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.187256 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-29vfc" event={"ID":"de82886a-c712-4ba8-b921-373802d4e7a7","Type":"ContainerDied","Data":"1eab54754a87344c355be5da22412e7c59fbf788bc9a4849a778b1089137778c"}
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.187295 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-29vfc" event={"ID":"de82886a-c712-4ba8-b921-373802d4e7a7","Type":"ContainerStarted","Data":"af6a24a18b805cc162bdc37ef2ba25d096fb8cec362c29e4bb7eb324f24e436c"}
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.191662 4965 generic.go:334] "Generic (PLEG): container finished" podID="dfd014f4-1151-4fc5-8b0b-cfeb54b3845d" containerID="5febd91fdf00927af6661c02036c04492a2c683750f9e984e67d385fa02aff6d" exitCode=0
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.191687 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4dd7f" event={"ID":"dfd014f4-1151-4fc5-8b0b-cfeb54b3845d","Type":"ContainerDied","Data":"5febd91fdf00927af6661c02036c04492a2c683750f9e984e67d385fa02aff6d"}
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.206989 4965 generic.go:334] "Generic (PLEG): container finished" podID="f5fa636a-ebf1-4873-a54b-bdf1171f8138" containerID="e4c39128a06b341fe96b189d872bd76c045429dbb13f013f0d1a36f7273a1afc" exitCode=0
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.207287 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f339-account-create-update-vw9jg" event={"ID":"f5fa636a-ebf1-4873-a54b-bdf1171f8138","Type":"ContainerDied","Data":"e4c39128a06b341fe96b189d872bd76c045429dbb13f013f0d1a36f7273a1afc"}
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.652307 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zwjcj"
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.706453 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-9s824"
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.724004 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zhgqf"
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.799184 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzn77\" (UniqueName: \"kubernetes.io/projected/e59eb68c-26e2-4951-900e-5a7b59197d54-kube-api-access-tzn77\") pod \"e59eb68c-26e2-4951-900e-5a7b59197d54\" (UID: \"e59eb68c-26e2-4951-900e-5a7b59197d54\") "
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.799263 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p4hq\" (UniqueName: \"kubernetes.io/projected/a9f0aaf4-29c1-4187-a237-39502b74bbe9-kube-api-access-4p4hq\") pod \"a9f0aaf4-29c1-4187-a237-39502b74bbe9\" (UID: \"a9f0aaf4-29c1-4187-a237-39502b74bbe9\") "
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.799297 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9f0aaf4-29c1-4187-a237-39502b74bbe9-operator-scripts\") pod \"a9f0aaf4-29c1-4187-a237-39502b74bbe9\" (UID: \"a9f0aaf4-29c1-4187-a237-39502b74bbe9\") "
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.799405 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e59eb68c-26e2-4951-900e-5a7b59197d54-operator-scripts\") pod \"e59eb68c-26e2-4951-900e-5a7b59197d54\" (UID: \"e59eb68c-26e2-4951-900e-5a7b59197d54\") "
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.800535 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59eb68c-26e2-4951-900e-5a7b59197d54-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e59eb68c-26e2-4951-900e-5a7b59197d54" (UID: "e59eb68c-26e2-4951-900e-5a7b59197d54"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.800710 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9f0aaf4-29c1-4187-a237-39502b74bbe9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9f0aaf4-29c1-4187-a237-39502b74bbe9" (UID: "a9f0aaf4-29c1-4187-a237-39502b74bbe9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.805124 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f0aaf4-29c1-4187-a237-39502b74bbe9-kube-api-access-4p4hq" (OuterVolumeSpecName: "kube-api-access-4p4hq") pod "a9f0aaf4-29c1-4187-a237-39502b74bbe9" (UID: "a9f0aaf4-29c1-4187-a237-39502b74bbe9"). InnerVolumeSpecName "kube-api-access-4p4hq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.806038 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e59eb68c-26e2-4951-900e-5a7b59197d54-kube-api-access-tzn77" (OuterVolumeSpecName: "kube-api-access-tzn77") pod "e59eb68c-26e2-4951-900e-5a7b59197d54" (UID: "e59eb68c-26e2-4951-900e-5a7b59197d54"). InnerVolumeSpecName "kube-api-access-tzn77". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.901536 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57994f21-19c8-4e09-b972-de9d0f398410-operator-scripts\") pod \"57994f21-19c8-4e09-b972-de9d0f398410\" (UID: \"57994f21-19c8-4e09-b972-de9d0f398410\") "
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.901770 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8jcp\" (UniqueName: \"kubernetes.io/projected/57994f21-19c8-4e09-b972-de9d0f398410-kube-api-access-c8jcp\") pod \"57994f21-19c8-4e09-b972-de9d0f398410\" (UID: \"57994f21-19c8-4e09-b972-de9d0f398410\") "
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.902081 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57994f21-19c8-4e09-b972-de9d0f398410-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "57994f21-19c8-4e09-b972-de9d0f398410" (UID: "57994f21-19c8-4e09-b972-de9d0f398410"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.902642 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzn77\" (UniqueName: \"kubernetes.io/projected/e59eb68c-26e2-4951-900e-5a7b59197d54-kube-api-access-tzn77\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.902668 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p4hq\" (UniqueName: \"kubernetes.io/projected/a9f0aaf4-29c1-4187-a237-39502b74bbe9-kube-api-access-4p4hq\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.902686 4965 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9f0aaf4-29c1-4187-a237-39502b74bbe9-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.902702 4965 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57994f21-19c8-4e09-b972-de9d0f398410-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.902718 4965 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e59eb68c-26e2-4951-900e-5a7b59197d54-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:38 crc kubenswrapper[4965]: I0219 10:02:38.905448 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57994f21-19c8-4e09-b972-de9d0f398410-kube-api-access-c8jcp" (OuterVolumeSpecName: "kube-api-access-c8jcp") pod "57994f21-19c8-4e09-b972-de9d0f398410" (UID: "57994f21-19c8-4e09-b972-de9d0f398410"). InnerVolumeSpecName "kube-api-access-c8jcp".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:39 crc kubenswrapper[4965]: I0219 10:02:39.004181 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8jcp\" (UniqueName: \"kubernetes.io/projected/57994f21-19c8-4e09-b972-de9d0f398410-kube-api-access-c8jcp\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:39 crc kubenswrapper[4965]: I0219 10:02:39.219918 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"efd4d0e2-bc4c-4bac-9236-37338445f7c7","Type":"ContainerStarted","Data":"9bba336a7312879e7a7e67a15199e21a7d41ad2b4b275ed6adfbede2a17ccd3b"} Feb 19 10:02:39 crc kubenswrapper[4965]: I0219 10:02:39.229361 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zwjcj" event={"ID":"e59eb68c-26e2-4951-900e-5a7b59197d54","Type":"ContainerDied","Data":"fc143e2fcb7da688051d0358582624ab35dcd2d428f6c4f9b1f8a716d7bdd436"} Feb 19 10:02:39 crc kubenswrapper[4965]: I0219 10:02:39.229404 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc143e2fcb7da688051d0358582624ab35dcd2d428f6c4f9b1f8a716d7bdd436" Feb 19 10:02:39 crc kubenswrapper[4965]: I0219 10:02:39.229444 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zwjcj" Feb 19 10:02:39 crc kubenswrapper[4965]: I0219 10:02:39.241718 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-29vfc" event={"ID":"de82886a-c712-4ba8-b921-373802d4e7a7","Type":"ContainerStarted","Data":"28be941b5ad90c273137a13063514952f3701f715393ed88a138ece1562781d7"} Feb 19 10:02:39 crc kubenswrapper[4965]: I0219 10:02:39.241841 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-29vfc" Feb 19 10:02:39 crc kubenswrapper[4965]: I0219 10:02:39.246319 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-9s824" Feb 19 10:02:39 crc kubenswrapper[4965]: I0219 10:02:39.246483 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-9s824" event={"ID":"a9f0aaf4-29c1-4187-a237-39502b74bbe9","Type":"ContainerDied","Data":"98efac91c86dd008c9a991ebc7b8736702930fe8f6da20e61940174edb8a4612"} Feb 19 10:02:39 crc kubenswrapper[4965]: I0219 10:02:39.246530 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98efac91c86dd008c9a991ebc7b8736702930fe8f6da20e61940174edb8a4612" Feb 19 10:02:39 crc kubenswrapper[4965]: I0219 10:02:39.265907 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zhgqf" Feb 19 10:02:39 crc kubenswrapper[4965]: I0219 10:02:39.265979 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zhgqf" event={"ID":"57994f21-19c8-4e09-b972-de9d0f398410","Type":"ContainerDied","Data":"6996880906e2b268204ec841b6a67121855a333007cc0690e9916b28abd11cbb"} Feb 19 10:02:39 crc kubenswrapper[4965]: I0219 10:02:39.266030 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6996880906e2b268204ec841b6a67121855a333007cc0690e9916b28abd11cbb" Feb 19 10:02:39 crc kubenswrapper[4965]: I0219 10:02:39.301414 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-29vfc" podStartSLOduration=3.301387585 podStartE2EDuration="3.301387585s" podCreationTimestamp="2026-02-19 10:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:39.287569969 +0000 UTC m=+1214.908891279" watchObservedRunningTime="2026-02-19 10:02:39.301387585 +0000 UTC m=+1214.922708895" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.245612 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.768634 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c4ed-account-create-update-w5v9f" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.774036 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-7880-account-create-update-fxqfz" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.782937 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4dd7f" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.813955 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6865-account-create-update-ql48d" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.821667 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f339-account-create-update-vw9jg" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.866393 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8wgm\" (UniqueName: \"kubernetes.io/projected/dfd014f4-1151-4fc5-8b0b-cfeb54b3845d-kube-api-access-p8wgm\") pod \"dfd014f4-1151-4fc5-8b0b-cfeb54b3845d\" (UID: \"dfd014f4-1151-4fc5-8b0b-cfeb54b3845d\") " Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.866466 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfd014f4-1151-4fc5-8b0b-cfeb54b3845d-operator-scripts\") pod \"dfd014f4-1151-4fc5-8b0b-cfeb54b3845d\" (UID: \"dfd014f4-1151-4fc5-8b0b-cfeb54b3845d\") " Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.866492 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5eb6197e-e339-43bd-861a-faff9e8f4f65-operator-scripts\") pod \"5eb6197e-e339-43bd-861a-faff9e8f4f65\" (UID: \"5eb6197e-e339-43bd-861a-faff9e8f4f65\") " Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.866551 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xng8f\" (UniqueName: \"kubernetes.io/projected/547c989f-c71d-4a1b-9031-61fd03d9c2f1-kube-api-access-xng8f\") pod \"547c989f-c71d-4a1b-9031-61fd03d9c2f1\" (UID: \"547c989f-c71d-4a1b-9031-61fd03d9c2f1\") " Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.866586 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c7tk\" (UniqueName: \"kubernetes.io/projected/e3111b00-ad13-4a92-97ca-95a778007dc2-kube-api-access-5c7tk\") pod \"e3111b00-ad13-4a92-97ca-95a778007dc2\" (UID: \"e3111b00-ad13-4a92-97ca-95a778007dc2\") " Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.866610 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f4s6\" (UniqueName: \"kubernetes.io/projected/f5fa636a-ebf1-4873-a54b-bdf1171f8138-kube-api-access-9f4s6\") pod \"f5fa636a-ebf1-4873-a54b-bdf1171f8138\" (UID: \"f5fa636a-ebf1-4873-a54b-bdf1171f8138\") " Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.866667 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/547c989f-c71d-4a1b-9031-61fd03d9c2f1-operator-scripts\") pod \"547c989f-c71d-4a1b-9031-61fd03d9c2f1\" (UID: \"547c989f-c71d-4a1b-9031-61fd03d9c2f1\") " Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.866694 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swngw\" (UniqueName: \"kubernetes.io/projected/5eb6197e-e339-43bd-861a-faff9e8f4f65-kube-api-access-swngw\") pod \"5eb6197e-e339-43bd-861a-faff9e8f4f65\" (UID: 
\"5eb6197e-e339-43bd-861a-faff9e8f4f65\") " Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.866767 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5fa636a-ebf1-4873-a54b-bdf1171f8138-operator-scripts\") pod \"f5fa636a-ebf1-4873-a54b-bdf1171f8138\" (UID: \"f5fa636a-ebf1-4873-a54b-bdf1171f8138\") " Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.866798 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3111b00-ad13-4a92-97ca-95a778007dc2-operator-scripts\") pod \"e3111b00-ad13-4a92-97ca-95a778007dc2\" (UID: \"e3111b00-ad13-4a92-97ca-95a778007dc2\") " Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.867791 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3111b00-ad13-4a92-97ca-95a778007dc2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e3111b00-ad13-4a92-97ca-95a778007dc2" (UID: "e3111b00-ad13-4a92-97ca-95a778007dc2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.868000 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfd014f4-1151-4fc5-8b0b-cfeb54b3845d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dfd014f4-1151-4fc5-8b0b-cfeb54b3845d" (UID: "dfd014f4-1151-4fc5-8b0b-cfeb54b3845d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.868074 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/547c989f-c71d-4a1b-9031-61fd03d9c2f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "547c989f-c71d-4a1b-9031-61fd03d9c2f1" (UID: "547c989f-c71d-4a1b-9031-61fd03d9c2f1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.868019 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5fa636a-ebf1-4873-a54b-bdf1171f8138-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f5fa636a-ebf1-4873-a54b-bdf1171f8138" (UID: "f5fa636a-ebf1-4873-a54b-bdf1171f8138"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.868638 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eb6197e-e339-43bd-861a-faff9e8f4f65-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5eb6197e-e339-43bd-861a-faff9e8f4f65" (UID: "5eb6197e-e339-43bd-861a-faff9e8f4f65"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.871577 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfd014f4-1151-4fc5-8b0b-cfeb54b3845d-kube-api-access-p8wgm" (OuterVolumeSpecName: "kube-api-access-p8wgm") pod "dfd014f4-1151-4fc5-8b0b-cfeb54b3845d" (UID: "dfd014f4-1151-4fc5-8b0b-cfeb54b3845d"). InnerVolumeSpecName "kube-api-access-p8wgm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.872517 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/547c989f-c71d-4a1b-9031-61fd03d9c2f1-kube-api-access-xng8f" (OuterVolumeSpecName: "kube-api-access-xng8f") pod "547c989f-c71d-4a1b-9031-61fd03d9c2f1" (UID: "547c989f-c71d-4a1b-9031-61fd03d9c2f1"). InnerVolumeSpecName "kube-api-access-xng8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.872577 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3111b00-ad13-4a92-97ca-95a778007dc2-kube-api-access-5c7tk" (OuterVolumeSpecName: "kube-api-access-5c7tk") pod "e3111b00-ad13-4a92-97ca-95a778007dc2" (UID: "e3111b00-ad13-4a92-97ca-95a778007dc2"). InnerVolumeSpecName "kube-api-access-5c7tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.881167 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5fa636a-ebf1-4873-a54b-bdf1171f8138-kube-api-access-9f4s6" (OuterVolumeSpecName: "kube-api-access-9f4s6") pod "f5fa636a-ebf1-4873-a54b-bdf1171f8138" (UID: "f5fa636a-ebf1-4873-a54b-bdf1171f8138"). InnerVolumeSpecName "kube-api-access-9f4s6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.888174 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eb6197e-e339-43bd-861a-faff9e8f4f65-kube-api-access-swngw" (OuterVolumeSpecName: "kube-api-access-swngw") pod "5eb6197e-e339-43bd-861a-faff9e8f4f65" (UID: "5eb6197e-e339-43bd-861a-faff9e8f4f65"). InnerVolumeSpecName "kube-api-access-swngw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.968421 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8wgm\" (UniqueName: \"kubernetes.io/projected/dfd014f4-1151-4fc5-8b0b-cfeb54b3845d-kube-api-access-p8wgm\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.968452 4965 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfd014f4-1151-4fc5-8b0b-cfeb54b3845d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.968461 4965 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5eb6197e-e339-43bd-861a-faff9e8f4f65-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.968470 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xng8f\" (UniqueName: \"kubernetes.io/projected/547c989f-c71d-4a1b-9031-61fd03d9c2f1-kube-api-access-xng8f\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.968478 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c7tk\" (UniqueName: \"kubernetes.io/projected/e3111b00-ad13-4a92-97ca-95a778007dc2-kube-api-access-5c7tk\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.968488 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f4s6\" (UniqueName: \"kubernetes.io/projected/f5fa636a-ebf1-4873-a54b-bdf1171f8138-kube-api-access-9f4s6\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.968497 4965 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/547c989f-c71d-4a1b-9031-61fd03d9c2f1-operator-scripts\") on node \"crc\" DevicePath \"\"" 
Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.968506 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swngw\" (UniqueName: \"kubernetes.io/projected/5eb6197e-e339-43bd-861a-faff9e8f4f65-kube-api-access-swngw\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.968514 4965 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5fa636a-ebf1-4873-a54b-bdf1171f8138-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:41 crc kubenswrapper[4965]: I0219 10:02:41.968523 4965 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3111b00-ad13-4a92-97ca-95a778007dc2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:42 crc kubenswrapper[4965]: I0219 10:02:42.308539 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c4ed-account-create-update-w5v9f" event={"ID":"5eb6197e-e339-43bd-861a-faff9e8f4f65","Type":"ContainerDied","Data":"1563aad2e395643383e3dd6fe6417c76dcccf74dd86c1dc0c296eca3e17b3d60"} Feb 19 10:02:42 crc kubenswrapper[4965]: I0219 10:02:42.308563 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c4ed-account-create-update-w5v9f" Feb 19 10:02:42 crc kubenswrapper[4965]: I0219 10:02:42.308596 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1563aad2e395643383e3dd6fe6417c76dcccf74dd86c1dc0c296eca3e17b3d60" Feb 19 10:02:42 crc kubenswrapper[4965]: I0219 10:02:42.310694 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r86pd" event={"ID":"5e834046-19e3-47b1-b822-6c73b0d8be74","Type":"ContainerStarted","Data":"dba155e576c3de023f13b353f54421392f022889e1b71ca59d844da11ccfb4f9"} Feb 19 10:02:42 crc kubenswrapper[4965]: I0219 10:02:42.312589 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4dd7f" event={"ID":"dfd014f4-1151-4fc5-8b0b-cfeb54b3845d","Type":"ContainerDied","Data":"36fdcf4e001ce6ac8ea35255b075972d3c6645552b856e55604a084a12c75f43"} Feb 19 10:02:42 crc kubenswrapper[4965]: I0219 10:02:42.312616 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36fdcf4e001ce6ac8ea35255b075972d3c6645552b856e55604a084a12c75f43" Feb 19 10:02:42 crc kubenswrapper[4965]: I0219 10:02:42.312665 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4dd7f" Feb 19 10:02:42 crc kubenswrapper[4965]: I0219 10:02:42.321235 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f339-account-create-update-vw9jg" event={"ID":"f5fa636a-ebf1-4873-a54b-bdf1171f8138","Type":"ContainerDied","Data":"ab5ff786bbda166d21a24abe45c92cdb0b8da68ae98b2e54de09ebe961e6c6bc"} Feb 19 10:02:42 crc kubenswrapper[4965]: I0219 10:02:42.321284 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab5ff786bbda166d21a24abe45c92cdb0b8da68ae98b2e54de09ebe961e6c6bc" Feb 19 10:02:42 crc kubenswrapper[4965]: I0219 10:02:42.321352 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f339-account-create-update-vw9jg" Feb 19 10:02:42 crc kubenswrapper[4965]: I0219 10:02:42.325920 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-7880-account-create-update-fxqfz" event={"ID":"e3111b00-ad13-4a92-97ca-95a778007dc2","Type":"ContainerDied","Data":"6268cab13ebdb5190400ccf3c2cd2323563ca06fc75ee2d170ab8440df235822"} Feb 19 10:02:42 crc kubenswrapper[4965]: I0219 10:02:42.325954 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6268cab13ebdb5190400ccf3c2cd2323563ca06fc75ee2d170ab8440df235822" Feb 19 10:02:42 crc kubenswrapper[4965]: I0219 10:02:42.326005 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-7880-account-create-update-fxqfz" Feb 19 10:02:42 crc kubenswrapper[4965]: I0219 10:02:42.330766 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6865-account-create-update-ql48d" event={"ID":"547c989f-c71d-4a1b-9031-61fd03d9c2f1","Type":"ContainerDied","Data":"f6a04adb107513a99af40e7a7b9c4cc038e53c5e243595afc3001fec5b30608b"} Feb 19 10:02:42 crc kubenswrapper[4965]: I0219 10:02:42.330801 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6a04adb107513a99af40e7a7b9c4cc038e53c5e243595afc3001fec5b30608b" Feb 19 10:02:42 crc kubenswrapper[4965]: I0219 10:02:42.330866 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-6865-account-create-update-ql48d" Feb 19 10:02:42 crc kubenswrapper[4965]: I0219 10:02:42.361091 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-r86pd" podStartSLOduration=2.6058925090000002 podStartE2EDuration="8.361075072s" podCreationTimestamp="2026-02-19 10:02:34 +0000 UTC" firstStartedPulling="2026-02-19 10:02:35.871608711 +0000 UTC m=+1211.492930021" lastFinishedPulling="2026-02-19 10:02:41.626791274 +0000 UTC m=+1217.248112584" observedRunningTime="2026-02-19 10:02:42.335247375 +0000 UTC m=+1217.956568685" watchObservedRunningTime="2026-02-19 10:02:42.361075072 +0000 UTC m=+1217.982396382" Feb 19 10:02:43 crc kubenswrapper[4965]: I0219 10:02:43.346681 4965 generic.go:334] "Generic (PLEG): container finished" podID="18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4" containerID="0a713c7e515c2ad86f134cc876a83fc23beb8a521c00339ea120389fa4cc470f" exitCode=0 Feb 19 10:02:43 crc kubenswrapper[4965]: I0219 10:02:43.346799 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-vkkc7" event={"ID":"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4","Type":"ContainerDied","Data":"0a713c7e515c2ad86f134cc876a83fc23beb8a521c00339ea120389fa4cc470f"} Feb 19 10:02:44 crc kubenswrapper[4965]: E0219 10:02:44.658410 4965 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e834046_19e3_47b1_b822_6c73b0d8be74.slice/crio-dba155e576c3de023f13b353f54421392f022889e1b71ca59d844da11ccfb4f9.scope\": RecentStats: unable to find data in memory cache]" Feb 19 10:02:44 crc kubenswrapper[4965]: I0219 10:02:44.800456 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-vkkc7" Feb 19 10:02:44 crc kubenswrapper[4965]: I0219 10:02:44.934239 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-config-data\") pod \"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4\" (UID: \"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4\") " Feb 19 10:02:44 crc kubenswrapper[4965]: I0219 10:02:44.934339 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h4gf\" (UniqueName: \"kubernetes.io/projected/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-kube-api-access-2h4gf\") pod \"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4\" (UID: \"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4\") " Feb 19 10:02:44 crc kubenswrapper[4965]: I0219 10:02:44.934476 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-combined-ca-bundle\") pod \"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4\" (UID: \"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4\") " Feb 19 10:02:44 crc kubenswrapper[4965]: I0219 10:02:44.934512 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-db-sync-config-data\") pod \"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4\" (UID: \"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4\") " Feb 19 10:02:44 crc kubenswrapper[4965]: I0219 10:02:44.943543 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-kube-api-access-2h4gf" (OuterVolumeSpecName: "kube-api-access-2h4gf") pod "18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4" (UID: "18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4"). InnerVolumeSpecName "kube-api-access-2h4gf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:44 crc kubenswrapper[4965]: I0219 10:02:44.966770 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4" (UID: "18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:44 crc kubenswrapper[4965]: I0219 10:02:44.976558 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4" (UID: "18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.018407 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-config-data" (OuterVolumeSpecName: "config-data") pod "18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4" (UID: "18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.036572 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h4gf\" (UniqueName: \"kubernetes.io/projected/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-kube-api-access-2h4gf\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.036618 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.036629 4965 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.036640 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.372643 4965 generic.go:334] "Generic (PLEG): container finished" podID="efd4d0e2-bc4c-4bac-9236-37338445f7c7" containerID="9bba336a7312879e7a7e67a15199e21a7d41ad2b4b275ed6adfbede2a17ccd3b" exitCode=0 Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.372733 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"efd4d0e2-bc4c-4bac-9236-37338445f7c7","Type":"ContainerDied","Data":"9bba336a7312879e7a7e67a15199e21a7d41ad2b4b275ed6adfbede2a17ccd3b"} Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.374857 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-vkkc7" 
event={"ID":"18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4","Type":"ContainerDied","Data":"b1addb71c18b15579fddf937c2541f6b592e47c287bae2b82729913628462a5d"}
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.375320 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1addb71c18b15579fddf937c2541f6b592e47c287bae2b82729913628462a5d"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.374896 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-vkkc7"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.376982 4965 generic.go:334] "Generic (PLEG): container finished" podID="5e834046-19e3-47b1-b822-6c73b0d8be74" containerID="dba155e576c3de023f13b353f54421392f022889e1b71ca59d844da11ccfb4f9" exitCode=0
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.377026 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r86pd" event={"ID":"5e834046-19e3-47b1-b822-6c73b0d8be74","Type":"ContainerDied","Data":"dba155e576c3de023f13b353f54421392f022889e1b71ca59d844da11ccfb4f9"}
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.828483 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-29vfc"]
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.829043 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-29vfc" podUID="de82886a-c712-4ba8-b921-373802d4e7a7" containerName="dnsmasq-dns" containerID="cri-o://28be941b5ad90c273137a13063514952f3701f715393ed88a138ece1562781d7" gracePeriod=10
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.834321 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c79d794d7-29vfc"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.861780 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-7plxw"]
Feb 19 10:02:45 crc kubenswrapper[4965]: E0219 10:02:45.862423 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4" containerName="glance-db-sync"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.862446 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4" containerName="glance-db-sync"
Feb 19 10:02:45 crc kubenswrapper[4965]: E0219 10:02:45.862467 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfd014f4-1151-4fc5-8b0b-cfeb54b3845d" containerName="mariadb-database-create"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.862476 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfd014f4-1151-4fc5-8b0b-cfeb54b3845d" containerName="mariadb-database-create"
Feb 19 10:02:45 crc kubenswrapper[4965]: E0219 10:02:45.862492 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="547c989f-c71d-4a1b-9031-61fd03d9c2f1" containerName="mariadb-account-create-update"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.862500 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="547c989f-c71d-4a1b-9031-61fd03d9c2f1" containerName="mariadb-account-create-update"
Feb 19 10:02:45 crc kubenswrapper[4965]: E0219 10:02:45.862514 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eb6197e-e339-43bd-861a-faff9e8f4f65" containerName="mariadb-account-create-update"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.862522 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eb6197e-e339-43bd-861a-faff9e8f4f65" containerName="mariadb-account-create-update"
Feb 19 10:02:45 crc kubenswrapper[4965]: E0219 10:02:45.862536 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3111b00-ad13-4a92-97ca-95a778007dc2" containerName="mariadb-account-create-update"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.862544 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3111b00-ad13-4a92-97ca-95a778007dc2" containerName="mariadb-account-create-update"
Feb 19 10:02:45 crc kubenswrapper[4965]: E0219 10:02:45.862556 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f0aaf4-29c1-4187-a237-39502b74bbe9" containerName="mariadb-database-create"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.862563 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f0aaf4-29c1-4187-a237-39502b74bbe9" containerName="mariadb-database-create"
Feb 19 10:02:45 crc kubenswrapper[4965]: E0219 10:02:45.862572 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5fa636a-ebf1-4873-a54b-bdf1171f8138" containerName="mariadb-account-create-update"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.862580 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5fa636a-ebf1-4873-a54b-bdf1171f8138" containerName="mariadb-account-create-update"
Feb 19 10:02:45 crc kubenswrapper[4965]: E0219 10:02:45.862595 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59eb68c-26e2-4951-900e-5a7b59197d54" containerName="mariadb-database-create"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.862603 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59eb68c-26e2-4951-900e-5a7b59197d54" containerName="mariadb-database-create"
Feb 19 10:02:45 crc kubenswrapper[4965]: E0219 10:02:45.862616 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57994f21-19c8-4e09-b972-de9d0f398410" containerName="mariadb-database-create"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.862623 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="57994f21-19c8-4e09-b972-de9d0f398410" containerName="mariadb-database-create"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.862828 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5fa636a-ebf1-4873-a54b-bdf1171f8138" containerName="mariadb-account-create-update"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.862845 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4" containerName="glance-db-sync"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.862853 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="57994f21-19c8-4e09-b972-de9d0f398410" containerName="mariadb-database-create"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.862871 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9f0aaf4-29c1-4187-a237-39502b74bbe9" containerName="mariadb-database-create"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.862885 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eb6197e-e339-43bd-861a-faff9e8f4f65" containerName="mariadb-account-create-update"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.862902 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3111b00-ad13-4a92-97ca-95a778007dc2" containerName="mariadb-account-create-update"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.862914 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfd014f4-1151-4fc5-8b0b-cfeb54b3845d" containerName="mariadb-database-create"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.862930 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59eb68c-26e2-4951-900e-5a7b59197d54" containerName="mariadb-database-create"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.862941 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="547c989f-c71d-4a1b-9031-61fd03d9c2f1" containerName="mariadb-account-create-update"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.864449 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-7plxw"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.894271 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-7plxw"]
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.955377 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-7plxw\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " pod="openstack/dnsmasq-dns-5f59b8f679-7plxw"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.955720 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-config\") pod \"dnsmasq-dns-5f59b8f679-7plxw\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " pod="openstack/dnsmasq-dns-5f59b8f679-7plxw"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.955755 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-7plxw\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " pod="openstack/dnsmasq-dns-5f59b8f679-7plxw"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.955817 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-7plxw\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " pod="openstack/dnsmasq-dns-5f59b8f679-7plxw"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.956018 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x98hp\" (UniqueName: \"kubernetes.io/projected/23eeb935-93ea-4fee-9140-89216cef2850-kube-api-access-x98hp\") pod \"dnsmasq-dns-5f59b8f679-7plxw\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " pod="openstack/dnsmasq-dns-5f59b8f679-7plxw"
Feb 19 10:02:45 crc kubenswrapper[4965]: I0219 10:02:45.956180 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-7plxw\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " pod="openstack/dnsmasq-dns-5f59b8f679-7plxw"
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.058165 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x98hp\" (UniqueName: \"kubernetes.io/projected/23eeb935-93ea-4fee-9140-89216cef2850-kube-api-access-x98hp\") pod \"dnsmasq-dns-5f59b8f679-7plxw\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " pod="openstack/dnsmasq-dns-5f59b8f679-7plxw"
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.059387 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-7plxw\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " pod="openstack/dnsmasq-dns-5f59b8f679-7plxw"
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.059548 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-7plxw\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " pod="openstack/dnsmasq-dns-5f59b8f679-7plxw"
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.059706 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-config\") pod \"dnsmasq-dns-5f59b8f679-7plxw\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " pod="openstack/dnsmasq-dns-5f59b8f679-7plxw"
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.059783 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-7plxw\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " pod="openstack/dnsmasq-dns-5f59b8f679-7plxw"
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.060143 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-7plxw\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " pod="openstack/dnsmasq-dns-5f59b8f679-7plxw"
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.060469 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-7plxw\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " pod="openstack/dnsmasq-dns-5f59b8f679-7plxw"
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.061318 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-config\") pod \"dnsmasq-dns-5f59b8f679-7plxw\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " pod="openstack/dnsmasq-dns-5f59b8f679-7plxw"
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.061502 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-7plxw\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " pod="openstack/dnsmasq-dns-5f59b8f679-7plxw"
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.061763 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-7plxw\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " pod="openstack/dnsmasq-dns-5f59b8f679-7plxw"
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.062093 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-7plxw\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " pod="openstack/dnsmasq-dns-5f59b8f679-7plxw"
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.081884 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x98hp\" (UniqueName: \"kubernetes.io/projected/23eeb935-93ea-4fee-9140-89216cef2850-kube-api-access-x98hp\") pod \"dnsmasq-dns-5f59b8f679-7plxw\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " pod="openstack/dnsmasq-dns-5f59b8f679-7plxw"
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.188735 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-7plxw"
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.346287 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-29vfc"
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.451600 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"efd4d0e2-bc4c-4bac-9236-37338445f7c7","Type":"ContainerStarted","Data":"e67f0deb2a8bc1d234028dec08d5fd84849decbc74686c0d8215d0d6111f93f5"}
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.465110 4965 generic.go:334] "Generic (PLEG): container finished" podID="de82886a-c712-4ba8-b921-373802d4e7a7" containerID="28be941b5ad90c273137a13063514952f3701f715393ed88a138ece1562781d7" exitCode=0
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.465340 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-29vfc"
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.465568 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-29vfc" event={"ID":"de82886a-c712-4ba8-b921-373802d4e7a7","Type":"ContainerDied","Data":"28be941b5ad90c273137a13063514952f3701f715393ed88a138ece1562781d7"}
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.465599 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-29vfc" event={"ID":"de82886a-c712-4ba8-b921-373802d4e7a7","Type":"ContainerDied","Data":"af6a24a18b805cc162bdc37ef2ba25d096fb8cec362c29e4bb7eb324f24e436c"}
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.465617 4965 scope.go:117] "RemoveContainer" containerID="28be941b5ad90c273137a13063514952f3701f715393ed88a138ece1562781d7"
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.467437 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-ovsdbserver-sb\") pod \"de82886a-c712-4ba8-b921-373802d4e7a7\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") "
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.467480 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-ovsdbserver-nb\") pod \"de82886a-c712-4ba8-b921-373802d4e7a7\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") "
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.467609 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd9nd\" (UniqueName: \"kubernetes.io/projected/de82886a-c712-4ba8-b921-373802d4e7a7-kube-api-access-wd9nd\") pod \"de82886a-c712-4ba8-b921-373802d4e7a7\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") "
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.467703 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-dns-swift-storage-0\") pod \"de82886a-c712-4ba8-b921-373802d4e7a7\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") "
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.467726 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-config\") pod \"de82886a-c712-4ba8-b921-373802d4e7a7\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") "
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.467760 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-dns-svc\") pod \"de82886a-c712-4ba8-b921-373802d4e7a7\" (UID: \"de82886a-c712-4ba8-b921-373802d4e7a7\") "
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.473599 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de82886a-c712-4ba8-b921-373802d4e7a7-kube-api-access-wd9nd" (OuterVolumeSpecName: "kube-api-access-wd9nd") pod "de82886a-c712-4ba8-b921-373802d4e7a7" (UID: "de82886a-c712-4ba8-b921-373802d4e7a7"). InnerVolumeSpecName "kube-api-access-wd9nd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.496459 4965 scope.go:117] "RemoveContainer" containerID="1eab54754a87344c355be5da22412e7c59fbf788bc9a4849a778b1089137778c"
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.519402 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-config" (OuterVolumeSpecName: "config") pod "de82886a-c712-4ba8-b921-373802d4e7a7" (UID: "de82886a-c712-4ba8-b921-373802d4e7a7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.527318 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "de82886a-c712-4ba8-b921-373802d4e7a7" (UID: "de82886a-c712-4ba8-b921-373802d4e7a7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.529175 4965 scope.go:117] "RemoveContainer" containerID="28be941b5ad90c273137a13063514952f3701f715393ed88a138ece1562781d7"
Feb 19 10:02:46 crc kubenswrapper[4965]: E0219 10:02:46.530603 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28be941b5ad90c273137a13063514952f3701f715393ed88a138ece1562781d7\": container with ID starting with 28be941b5ad90c273137a13063514952f3701f715393ed88a138ece1562781d7 not found: ID does not exist" containerID="28be941b5ad90c273137a13063514952f3701f715393ed88a138ece1562781d7"
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.530635 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28be941b5ad90c273137a13063514952f3701f715393ed88a138ece1562781d7"} err="failed to get container status \"28be941b5ad90c273137a13063514952f3701f715393ed88a138ece1562781d7\": rpc error: code = NotFound desc = could not find container \"28be941b5ad90c273137a13063514952f3701f715393ed88a138ece1562781d7\": container with ID starting with 28be941b5ad90c273137a13063514952f3701f715393ed88a138ece1562781d7 not found: ID does not exist"
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.530664 4965 scope.go:117] "RemoveContainer" containerID="1eab54754a87344c355be5da22412e7c59fbf788bc9a4849a778b1089137778c"
Feb 19 10:02:46 crc kubenswrapper[4965]: E0219 10:02:46.530851 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1eab54754a87344c355be5da22412e7c59fbf788bc9a4849a778b1089137778c\": container with ID starting with 1eab54754a87344c355be5da22412e7c59fbf788bc9a4849a778b1089137778c not found: ID does not exist" containerID="1eab54754a87344c355be5da22412e7c59fbf788bc9a4849a778b1089137778c"
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.530878 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1eab54754a87344c355be5da22412e7c59fbf788bc9a4849a778b1089137778c"} err="failed to get container status \"1eab54754a87344c355be5da22412e7c59fbf788bc9a4849a778b1089137778c\": rpc error: code = NotFound desc = could not find container \"1eab54754a87344c355be5da22412e7c59fbf788bc9a4849a778b1089137778c\": container with ID starting with 1eab54754a87344c355be5da22412e7c59fbf788bc9a4849a778b1089137778c not found: ID does not exist"
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.534535 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "de82886a-c712-4ba8-b921-373802d4e7a7" (UID: "de82886a-c712-4ba8-b921-373802d4e7a7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.534966 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "de82886a-c712-4ba8-b921-373802d4e7a7" (UID: "de82886a-c712-4ba8-b921-373802d4e7a7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.553129 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "de82886a-c712-4ba8-b921-373802d4e7a7" (UID: "de82886a-c712-4ba8-b921-373802d4e7a7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.570620 4965 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.570648 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.570658 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.570667 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd9nd\" (UniqueName: \"kubernetes.io/projected/de82886a-c712-4ba8-b921-373802d4e7a7-kube-api-access-wd9nd\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.570676 4965 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.570684 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de82886a-c712-4ba8-b921-373802d4e7a7-config\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.719101 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-7plxw"]
Feb 19 10:02:46 crc kubenswrapper[4965]: W0219 10:02:46.719595 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23eeb935_93ea_4fee_9140_89216cef2850.slice/crio-182503f119ba3e85734f707d61ccfbb44ee6d65759c05d6bd6ed09a2508496f2 WatchSource:0}: Error finding container 182503f119ba3e85734f707d61ccfbb44ee6d65759c05d6bd6ed09a2508496f2: Status 404 returned error can't find the container with id 182503f119ba3e85734f707d61ccfbb44ee6d65759c05d6bd6ed09a2508496f2
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.899930 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-r86pd"
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.920750 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-29vfc"]
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.929571 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-29vfc"]
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.976898 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e834046-19e3-47b1-b822-6c73b0d8be74-config-data\") pod \"5e834046-19e3-47b1-b822-6c73b0d8be74\" (UID: \"5e834046-19e3-47b1-b822-6c73b0d8be74\") "
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.977030 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkz96\" (UniqueName: \"kubernetes.io/projected/5e834046-19e3-47b1-b822-6c73b0d8be74-kube-api-access-zkz96\") pod \"5e834046-19e3-47b1-b822-6c73b0d8be74\" (UID: \"5e834046-19e3-47b1-b822-6c73b0d8be74\") "
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.977055 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e834046-19e3-47b1-b822-6c73b0d8be74-combined-ca-bundle\") pod \"5e834046-19e3-47b1-b822-6c73b0d8be74\" (UID: \"5e834046-19e3-47b1-b822-6c73b0d8be74\") "
Feb 19 10:02:46 crc kubenswrapper[4965]: I0219 10:02:46.984783 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e834046-19e3-47b1-b822-6c73b0d8be74-kube-api-access-zkz96" (OuterVolumeSpecName: "kube-api-access-zkz96") pod "5e834046-19e3-47b1-b822-6c73b0d8be74" (UID: "5e834046-19e3-47b1-b822-6c73b0d8be74"). InnerVolumeSpecName "kube-api-access-zkz96". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.020285 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e834046-19e3-47b1-b822-6c73b0d8be74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e834046-19e3-47b1-b822-6c73b0d8be74" (UID: "5e834046-19e3-47b1-b822-6c73b0d8be74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.041886 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e834046-19e3-47b1-b822-6c73b0d8be74-config-data" (OuterVolumeSpecName: "config-data") pod "5e834046-19e3-47b1-b822-6c73b0d8be74" (UID: "5e834046-19e3-47b1-b822-6c73b0d8be74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.078580 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e834046-19e3-47b1-b822-6c73b0d8be74-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.078625 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkz96\" (UniqueName: \"kubernetes.io/projected/5e834046-19e3-47b1-b822-6c73b0d8be74-kube-api-access-zkz96\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.078637 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e834046-19e3-47b1-b822-6c73b0d8be74-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.219346 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de82886a-c712-4ba8-b921-373802d4e7a7" path="/var/lib/kubelet/pods/de82886a-c712-4ba8-b921-373802d4e7a7/volumes"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.478918 4965 generic.go:334] "Generic (PLEG): container finished" podID="23eeb935-93ea-4fee-9140-89216cef2850" containerID="4d4b9ddf09f8dbe517647b068902968523cf368894ce455e2660475978387aae" exitCode=0
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.479079 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-7plxw" event={"ID":"23eeb935-93ea-4fee-9140-89216cef2850","Type":"ContainerDied","Data":"4d4b9ddf09f8dbe517647b068902968523cf368894ce455e2660475978387aae"}
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.479130 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-7plxw" event={"ID":"23eeb935-93ea-4fee-9140-89216cef2850","Type":"ContainerStarted","Data":"182503f119ba3e85734f707d61ccfbb44ee6d65759c05d6bd6ed09a2508496f2"}
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.488457 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r86pd" event={"ID":"5e834046-19e3-47b1-b822-6c73b0d8be74","Type":"ContainerDied","Data":"27def61dfb97fe20399d44d8924e98e1ee3722850286f15fa73130af95bef85d"}
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.488514 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27def61dfb97fe20399d44d8924e98e1ee3722850286f15fa73130af95bef85d"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.488524 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-r86pd"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.746266 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-7plxw"]
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.798799 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wqhc2"]
Feb 19 10:02:47 crc kubenswrapper[4965]: E0219 10:02:47.799404 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e834046-19e3-47b1-b822-6c73b0d8be74" containerName="keystone-db-sync"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.799431 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e834046-19e3-47b1-b822-6c73b0d8be74" containerName="keystone-db-sync"
Feb 19 10:02:47 crc kubenswrapper[4965]: E0219 10:02:47.799451 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de82886a-c712-4ba8-b921-373802d4e7a7" containerName="dnsmasq-dns"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.799459 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="de82886a-c712-4ba8-b921-373802d4e7a7" containerName="dnsmasq-dns"
Feb 19 10:02:47 crc kubenswrapper[4965]: E0219 10:02:47.799483 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de82886a-c712-4ba8-b921-373802d4e7a7" containerName="init"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.799492 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="de82886a-c712-4ba8-b921-373802d4e7a7" containerName="init"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.799695 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="de82886a-c712-4ba8-b921-373802d4e7a7" containerName="dnsmasq-dns"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.799723 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e834046-19e3-47b1-b822-6c73b0d8be74" containerName="keystone-db-sync"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.800579 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wqhc2"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.813324 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.813514 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.813987 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.814206 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9ln6f"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.814450 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.835617 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wqhc2"]
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.847957 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-ws9nm"]
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.849692 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-ws9nm"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.879346 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-ws9nm"]
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.942932 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-fernet-keys\") pod \"keystone-bootstrap-wqhc2\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " pod="openstack/keystone-bootstrap-wqhc2"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.943763 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-config-data\") pod \"keystone-bootstrap-wqhc2\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " pod="openstack/keystone-bootstrap-wqhc2"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.943787 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-745g5\" (UniqueName: \"kubernetes.io/projected/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-kube-api-access-745g5\") pod \"keystone-bootstrap-wqhc2\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " pod="openstack/keystone-bootstrap-wqhc2"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.943809 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-ws9nm\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") " pod="openstack/dnsmasq-dns-bbf5cc879-ws9nm"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.943866 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-ws9nm\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") " pod="openstack/dnsmasq-dns-bbf5cc879-ws9nm"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.943939 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-ws9nm\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") " pod="openstack/dnsmasq-dns-bbf5cc879-ws9nm"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.943957 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-config\") pod \"dnsmasq-dns-bbf5cc879-ws9nm\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") " pod="openstack/dnsmasq-dns-bbf5cc879-ws9nm"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.943981 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-scripts\") pod \"keystone-bootstrap-wqhc2\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " pod="openstack/keystone-bootstrap-wqhc2"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.943998 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-credential-keys\") pod \"keystone-bootstrap-wqhc2\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " pod="openstack/keystone-bootstrap-wqhc2"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.944023 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpbl8\" (UniqueName: \"kubernetes.io/projected/e56b5777-a1b2-4ee2-8998-0620cbcfb678-kube-api-access-hpbl8\") pod \"dnsmasq-dns-bbf5cc879-ws9nm\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") " pod="openstack/dnsmasq-dns-bbf5cc879-ws9nm"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.944056 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-ws9nm\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") " pod="openstack/dnsmasq-dns-bbf5cc879-ws9nm"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.944085 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-combined-ca-bundle\") pod \"keystone-bootstrap-wqhc2\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " pod="openstack/keystone-bootstrap-wqhc2"
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.961853 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-7rwpz"]
Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.963162 4965 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-db-sync-7rwpz" Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.965448 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.965637 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.965771 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5dg9r" Feb 19 10:02:47 crc kubenswrapper[4965]: I0219 10:02:47.988655 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7rwpz"] Feb 19 10:02:48 crc kubenswrapper[4965]: E0219 10:02:48.013691 4965 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 19 10:02:48 crc kubenswrapper[4965]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/23eeb935-93ea-4fee-9140-89216cef2850/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 19 10:02:48 crc kubenswrapper[4965]: > podSandboxID="182503f119ba3e85734f707d61ccfbb44ee6d65759c05d6bd6ed09a2508496f2" Feb 19 10:02:48 crc kubenswrapper[4965]: E0219 10:02:48.013944 4965 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 10:02:48 crc kubenswrapper[4965]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf8h546hb4h579h64dhdch589h5f8h556h96h94h64dh589h66ch5ch4h5c7hf5h554hb8h557h674h695hb7h5f4h9ch5bfh99h99h5c8h654h64fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x98hp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f59b8f679-7plxw_openstack(23eeb935-93ea-4fee-9140-89216cef2850): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/23eeb935-93ea-4fee-9140-89216cef2850/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 19 10:02:48 crc kubenswrapper[4965]: > logger="UnhandledError" Feb 19 10:02:48 crc kubenswrapper[4965]: E0219 10:02:48.015120 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/23eeb935-93ea-4fee-9140-89216cef2850/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5f59b8f679-7plxw" podUID="23eeb935-93ea-4fee-9140-89216cef2850" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.047936 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-scripts\") pod \"keystone-bootstrap-wqhc2\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " 
pod="openstack/keystone-bootstrap-wqhc2" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.048740 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-credential-keys\") pod \"keystone-bootstrap-wqhc2\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " pod="openstack/keystone-bootstrap-wqhc2" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.048782 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpbl8\" (UniqueName: \"kubernetes.io/projected/e56b5777-a1b2-4ee2-8998-0620cbcfb678-kube-api-access-hpbl8\") pod \"dnsmasq-dns-bbf5cc879-ws9nm\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") " pod="openstack/dnsmasq-dns-bbf5cc879-ws9nm" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.048823 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-combined-ca-bundle\") pod \"cinder-db-sync-7rwpz\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") " pod="openstack/cinder-db-sync-7rwpz" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.048847 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-ws9nm\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") " pod="openstack/dnsmasq-dns-bbf5cc879-ws9nm" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.048881 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-combined-ca-bundle\") pod \"keystone-bootstrap-wqhc2\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " 
pod="openstack/keystone-bootstrap-wqhc2" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.048903 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-fernet-keys\") pod \"keystone-bootstrap-wqhc2\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " pod="openstack/keystone-bootstrap-wqhc2" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.048936 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-scripts\") pod \"cinder-db-sync-7rwpz\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") " pod="openstack/cinder-db-sync-7rwpz" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.048953 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-config-data\") pod \"keystone-bootstrap-wqhc2\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " pod="openstack/keystone-bootstrap-wqhc2" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.048968 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-745g5\" (UniqueName: \"kubernetes.io/projected/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-kube-api-access-745g5\") pod \"keystone-bootstrap-wqhc2\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " pod="openstack/keystone-bootstrap-wqhc2" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.048987 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-ws9nm\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") " pod="openstack/dnsmasq-dns-bbf5cc879-ws9nm" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 
10:02:48.049020 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-config-data\") pod \"cinder-db-sync-7rwpz\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") " pod="openstack/cinder-db-sync-7rwpz" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.049044 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-db-sync-config-data\") pod \"cinder-db-sync-7rwpz\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") " pod="openstack/cinder-db-sync-7rwpz" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.049081 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-ws9nm\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") " pod="openstack/dnsmasq-dns-bbf5cc879-ws9nm" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.049115 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce8bac0d-7aa6-437f-b234-370384cf1153-etc-machine-id\") pod \"cinder-db-sync-7rwpz\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") " pod="openstack/cinder-db-sync-7rwpz" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.049164 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64rvr\" (UniqueName: \"kubernetes.io/projected/ce8bac0d-7aa6-437f-b234-370384cf1153-kube-api-access-64rvr\") pod \"cinder-db-sync-7rwpz\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") " pod="openstack/cinder-db-sync-7rwpz" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.049206 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-ws9nm\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") " pod="openstack/dnsmasq-dns-bbf5cc879-ws9nm" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.049249 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-config\") pod \"dnsmasq-dns-bbf5cc879-ws9nm\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") " pod="openstack/dnsmasq-dns-bbf5cc879-ws9nm" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.050042 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-config\") pod \"dnsmasq-dns-bbf5cc879-ws9nm\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") " pod="openstack/dnsmasq-dns-bbf5cc879-ws9nm" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.053571 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-ws9nm\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") " pod="openstack/dnsmasq-dns-bbf5cc879-ws9nm" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.056040 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-ws9nm\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") " pod="openstack/dnsmasq-dns-bbf5cc879-ws9nm" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.056663 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-ws9nm\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") " pod="openstack/dnsmasq-dns-bbf5cc879-ws9nm" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.067434 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-ws9nm\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") " pod="openstack/dnsmasq-dns-bbf5cc879-ws9nm" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.095622 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-wsss7"] Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.100726 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-wsss7" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.104206 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.104422 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-p2rjz" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.104636 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.105423 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-combined-ca-bundle\") pod \"keystone-bootstrap-wqhc2\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " pod="openstack/keystone-bootstrap-wqhc2" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.110387 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.113120 
4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.114034 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-credential-keys\") pod \"keystone-bootstrap-wqhc2\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " pod="openstack/keystone-bootstrap-wqhc2" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.138242 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-wsss7"] Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.140857 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.145281 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.147843 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.150349 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce8bac0d-7aa6-437f-b234-370384cf1153-etc-machine-id\") pod \"cinder-db-sync-7rwpz\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") " pod="openstack/cinder-db-sync-7rwpz" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.150432 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64rvr\" (UniqueName: \"kubernetes.io/projected/ce8bac0d-7aa6-437f-b234-370384cf1153-kube-api-access-64rvr\") pod \"cinder-db-sync-7rwpz\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") " pod="openstack/cinder-db-sync-7rwpz" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.150532 4965 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-combined-ca-bundle\") pod \"cinder-db-sync-7rwpz\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") " pod="openstack/cinder-db-sync-7rwpz" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.150599 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-scripts\") pod \"cinder-db-sync-7rwpz\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") " pod="openstack/cinder-db-sync-7rwpz" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.150662 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-config-data\") pod \"cinder-db-sync-7rwpz\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") " pod="openstack/cinder-db-sync-7rwpz" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.150697 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-db-sync-config-data\") pod \"cinder-db-sync-7rwpz\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") " pod="openstack/cinder-db-sync-7rwpz" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.154312 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce8bac0d-7aa6-437f-b234-370384cf1153-etc-machine-id\") pod \"cinder-db-sync-7rwpz\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") " pod="openstack/cinder-db-sync-7rwpz" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.163780 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-qllz5"] Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.165395 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-qllz5" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.168813 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.171579 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-nhmv6" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.236297 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qllz5"] Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.252400 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcad3660-ade7-407c-9d77-bb1c2c2721a8-run-httpd\") pod \"ceilometer-0\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " pod="openstack/ceilometer-0" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.252498 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7bc0481-970b-4e8e-868f-490ea553952e-db-sync-config-data\") pod \"barbican-db-sync-qllz5\" (UID: \"d7bc0481-970b-4e8e-868f-490ea553952e\") " pod="openstack/barbican-db-sync-qllz5" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.252537 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flv7f\" (UniqueName: \"kubernetes.io/projected/fcad3660-ade7-407c-9d77-bb1c2c2721a8-kube-api-access-flv7f\") pod \"ceilometer-0\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " pod="openstack/ceilometer-0" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.252590 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sjwh\" (UniqueName: 
\"kubernetes.io/projected/f4c38eda-5f59-4756-a3b7-2731c66ef436-kube-api-access-6sjwh\") pod \"neutron-db-sync-wsss7\" (UID: \"f4c38eda-5f59-4756-a3b7-2731c66ef436\") " pod="openstack/neutron-db-sync-wsss7" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.252643 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " pod="openstack/ceilometer-0" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.252758 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4c38eda-5f59-4756-a3b7-2731c66ef436-config\") pod \"neutron-db-sync-wsss7\" (UID: \"f4c38eda-5f59-4756-a3b7-2731c66ef436\") " pod="openstack/neutron-db-sync-wsss7" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.252835 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7bc0481-970b-4e8e-868f-490ea553952e-combined-ca-bundle\") pod \"barbican-db-sync-qllz5\" (UID: \"d7bc0481-970b-4e8e-868f-490ea553952e\") " pod="openstack/barbican-db-sync-qllz5" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.252853 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " pod="openstack/ceilometer-0" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.252870 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/fcad3660-ade7-407c-9d77-bb1c2c2721a8-log-httpd\") pod \"ceilometer-0\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " pod="openstack/ceilometer-0" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.252942 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-config-data\") pod \"ceilometer-0\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " pod="openstack/ceilometer-0" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.252978 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-scripts\") pod \"ceilometer-0\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " pod="openstack/ceilometer-0" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.253043 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbbv9\" (UniqueName: \"kubernetes.io/projected/d7bc0481-970b-4e8e-868f-490ea553952e-kube-api-access-bbbv9\") pod \"barbican-db-sync-qllz5\" (UID: \"d7bc0481-970b-4e8e-868f-490ea553952e\") " pod="openstack/barbican-db-sync-qllz5" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.253129 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4c38eda-5f59-4756-a3b7-2731c66ef436-combined-ca-bundle\") pod \"neutron-db-sync-wsss7\" (UID: \"f4c38eda-5f59-4756-a3b7-2731c66ef436\") " pod="openstack/neutron-db-sync-wsss7" Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.300289 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-ws9nm"] Feb 19 10:02:48 crc kubenswrapper[4965]: E0219 10:02:48.300955 4965 pod_workers.go:1301] "Error syncing pod, skipping" 
err="unmounted volumes=[kube-api-access-hpbl8], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-bbf5cc879-ws9nm" podUID="e56b5777-a1b2-4ee2-8998-0620cbcfb678"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.312310 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-scripts\") pod \"keystone-bootstrap-wqhc2\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " pod="openstack/keystone-bootstrap-wqhc2"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.312924 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-config-data\") pod \"keystone-bootstrap-wqhc2\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " pod="openstack/keystone-bootstrap-wqhc2"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.322836 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-fernet-keys\") pod \"keystone-bootstrap-wqhc2\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " pod="openstack/keystone-bootstrap-wqhc2"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.330952 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-745g5\" (UniqueName: \"kubernetes.io/projected/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-kube-api-access-745g5\") pod \"keystone-bootstrap-wqhc2\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " pod="openstack/keystone-bootstrap-wqhc2"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.331941 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpbl8\" (UniqueName: \"kubernetes.io/projected/e56b5777-a1b2-4ee2-8998-0620cbcfb678-kube-api-access-hpbl8\") pod \"dnsmasq-dns-bbf5cc879-ws9nm\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") " pod="openstack/dnsmasq-dns-bbf5cc879-ws9nm"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.335866 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-db-sync-config-data\") pod \"cinder-db-sync-7rwpz\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") " pod="openstack/cinder-db-sync-7rwpz"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.336223 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-combined-ca-bundle\") pod \"cinder-db-sync-7rwpz\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") " pod="openstack/cinder-db-sync-7rwpz"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.340574 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-scripts\") pod \"cinder-db-sync-7rwpz\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") " pod="openstack/cinder-db-sync-7rwpz"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.340990 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64rvr\" (UniqueName: \"kubernetes.io/projected/ce8bac0d-7aa6-437f-b234-370384cf1153-kube-api-access-64rvr\") pod \"cinder-db-sync-7rwpz\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") " pod="openstack/cinder-db-sync-7rwpz"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.341425 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-config-data\") pod \"cinder-db-sync-7rwpz\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") " pod="openstack/cinder-db-sync-7rwpz"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.345763 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-wh9q9"]
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.346893 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-wh9q9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.356124 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4c38eda-5f59-4756-a3b7-2731c66ef436-config\") pod \"neutron-db-sync-wsss7\" (UID: \"f4c38eda-5f59-4756-a3b7-2731c66ef436\") " pod="openstack/neutron-db-sync-wsss7"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.356183 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7bc0481-970b-4e8e-868f-490ea553952e-combined-ca-bundle\") pod \"barbican-db-sync-qllz5\" (UID: \"d7bc0481-970b-4e8e-868f-490ea553952e\") " pod="openstack/barbican-db-sync-qllz5"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.356207 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " pod="openstack/ceilometer-0"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.356241 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcad3660-ade7-407c-9d77-bb1c2c2721a8-log-httpd\") pod \"ceilometer-0\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " pod="openstack/ceilometer-0"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.356270 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-config-data\") pod \"ceilometer-0\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " pod="openstack/ceilometer-0"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.356309 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-scripts\") pod \"ceilometer-0\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " pod="openstack/ceilometer-0"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.356332 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbbv9\" (UniqueName: \"kubernetes.io/projected/d7bc0481-970b-4e8e-868f-490ea553952e-kube-api-access-bbbv9\") pod \"barbican-db-sync-qllz5\" (UID: \"d7bc0481-970b-4e8e-868f-490ea553952e\") " pod="openstack/barbican-db-sync-qllz5"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.356372 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4c38eda-5f59-4756-a3b7-2731c66ef436-combined-ca-bundle\") pod \"neutron-db-sync-wsss7\" (UID: \"f4c38eda-5f59-4756-a3b7-2731c66ef436\") " pod="openstack/neutron-db-sync-wsss7"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.356424 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcad3660-ade7-407c-9d77-bb1c2c2721a8-run-httpd\") pod \"ceilometer-0\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " pod="openstack/ceilometer-0"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.356442 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7bc0481-970b-4e8e-868f-490ea553952e-db-sync-config-data\") pod \"barbican-db-sync-qllz5\" (UID: \"d7bc0481-970b-4e8e-868f-490ea553952e\") " pod="openstack/barbican-db-sync-qllz5"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.356464 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flv7f\" (UniqueName: \"kubernetes.io/projected/fcad3660-ade7-407c-9d77-bb1c2c2721a8-kube-api-access-flv7f\") pod \"ceilometer-0\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " pod="openstack/ceilometer-0"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.356487 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sjwh\" (UniqueName: \"kubernetes.io/projected/f4c38eda-5f59-4756-a3b7-2731c66ef436-kube-api-access-6sjwh\") pod \"neutron-db-sync-wsss7\" (UID: \"f4c38eda-5f59-4756-a3b7-2731c66ef436\") " pod="openstack/neutron-db-sync-wsss7"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.356519 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " pod="openstack/ceilometer-0"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.360857 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.361041 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.361149 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-rbdxx"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.361285 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.361340 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " pod="openstack/ceilometer-0"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.364885 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4c38eda-5f59-4756-a3b7-2731c66ef436-config\") pod \"neutron-db-sync-wsss7\" (UID: \"f4c38eda-5f59-4756-a3b7-2731c66ef436\") " pod="openstack/neutron-db-sync-wsss7"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.366662 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcad3660-ade7-407c-9d77-bb1c2c2721a8-log-httpd\") pod \"ceilometer-0\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " pod="openstack/ceilometer-0"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.367361 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-wh9q9"]
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.367502 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcad3660-ade7-407c-9d77-bb1c2c2721a8-run-httpd\") pod \"ceilometer-0\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " pod="openstack/ceilometer-0"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.373421 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-scripts\") pod \"ceilometer-0\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " pod="openstack/ceilometer-0"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.376484 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4c38eda-5f59-4756-a3b7-2731c66ef436-combined-ca-bundle\") pod \"neutron-db-sync-wsss7\" (UID: \"f4c38eda-5f59-4756-a3b7-2731c66ef436\") " pod="openstack/neutron-db-sync-wsss7"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.377877 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " pod="openstack/ceilometer-0"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.377941 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-ns8h9"]
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.379160 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ns8h9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.381904 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7bc0481-970b-4e8e-868f-490ea553952e-combined-ca-bundle\") pod \"barbican-db-sync-qllz5\" (UID: \"d7bc0481-970b-4e8e-868f-490ea553952e\") " pod="openstack/barbican-db-sync-qllz5"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.383711 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7bc0481-970b-4e8e-868f-490ea553952e-db-sync-config-data\") pod \"barbican-db-sync-qllz5\" (UID: \"d7bc0481-970b-4e8e-868f-490ea553952e\") " pod="openstack/barbican-db-sync-qllz5"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.387436 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-config-data\") pod \"ceilometer-0\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " pod="openstack/ceilometer-0"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.388310 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-bm75n"]
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.389854 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.390158 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.390316 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.390411 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-sbdps"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.393079 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbbv9\" (UniqueName: \"kubernetes.io/projected/d7bc0481-970b-4e8e-868f-490ea553952e-kube-api-access-bbbv9\") pod \"barbican-db-sync-qllz5\" (UID: \"d7bc0481-970b-4e8e-868f-490ea553952e\") " pod="openstack/barbican-db-sync-qllz5"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.401921 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ns8h9"]
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.407902 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-bm75n"]
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.416991 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flv7f\" (UniqueName: \"kubernetes.io/projected/fcad3660-ade7-407c-9d77-bb1c2c2721a8-kube-api-access-flv7f\") pod \"ceilometer-0\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " pod="openstack/ceilometer-0"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.456805 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sjwh\" (UniqueName: \"kubernetes.io/projected/f4c38eda-5f59-4756-a3b7-2731c66ef436-kube-api-access-6sjwh\") pod \"neutron-db-sync-wsss7\" (UID: \"f4c38eda-5f59-4756-a3b7-2731c66ef436\") " pod="openstack/neutron-db-sync-wsss7"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.457917 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4zwf\" (UniqueName: \"kubernetes.io/projected/8671fa02-a5fa-41f0-b232-fdfc4133ab58-kube-api-access-m4zwf\") pod \"placement-db-sync-ns8h9\" (UID: \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\") " pod="openstack/placement-db-sync-ns8h9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.457987 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whzwf\" (UniqueName: \"kubernetes.io/projected/e4e3779f-9f25-4334-97f9-a3778bd78d5e-kube-api-access-whzwf\") pod \"cloudkitty-db-sync-wh9q9\" (UID: \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\") " pod="openstack/cloudkitty-db-sync-wh9q9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.458030 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-bm75n\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.458047 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e3779f-9f25-4334-97f9-a3778bd78d5e-combined-ca-bundle\") pod \"cloudkitty-db-sync-wh9q9\" (UID: \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\") " pod="openstack/cloudkitty-db-sync-wh9q9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.458075 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-bm75n\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.458105 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8671fa02-a5fa-41f0-b232-fdfc4133ab58-config-data\") pod \"placement-db-sync-ns8h9\" (UID: \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\") " pod="openstack/placement-db-sync-ns8h9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.458148 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8671fa02-a5fa-41f0-b232-fdfc4133ab58-scripts\") pod \"placement-db-sync-ns8h9\" (UID: \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\") " pod="openstack/placement-db-sync-ns8h9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.458174 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e3779f-9f25-4334-97f9-a3778bd78d5e-config-data\") pod \"cloudkitty-db-sync-wh9q9\" (UID: \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\") " pod="openstack/cloudkitty-db-sync-wh9q9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.458273 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8671fa02-a5fa-41f0-b232-fdfc4133ab58-logs\") pod \"placement-db-sync-ns8h9\" (UID: \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\") " pod="openstack/placement-db-sync-ns8h9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.458297 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e4e3779f-9f25-4334-97f9-a3778bd78d5e-certs\") pod \"cloudkitty-db-sync-wh9q9\" (UID: \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\") " pod="openstack/cloudkitty-db-sync-wh9q9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.458316 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-bm75n\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.458333 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8671fa02-a5fa-41f0-b232-fdfc4133ab58-combined-ca-bundle\") pod \"placement-db-sync-ns8h9\" (UID: \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\") " pod="openstack/placement-db-sync-ns8h9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.458350 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnfjw\" (UniqueName: \"kubernetes.io/projected/ed4a364a-14cc-442e-9297-ca9497e633ca-kube-api-access-nnfjw\") pod \"dnsmasq-dns-56df8fb6b7-bm75n\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.458366 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-bm75n\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.458384 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-config\") pod \"dnsmasq-dns-56df8fb6b7-bm75n\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.458413 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4e3779f-9f25-4334-97f9-a3778bd78d5e-scripts\") pod \"cloudkitty-db-sync-wh9q9\" (UID: \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\") " pod="openstack/cloudkitty-db-sync-wh9q9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.511314 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wqhc2"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.543151 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-ws9nm"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.561334 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8671fa02-a5fa-41f0-b232-fdfc4133ab58-scripts\") pod \"placement-db-sync-ns8h9\" (UID: \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\") " pod="openstack/placement-db-sync-ns8h9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.561395 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e3779f-9f25-4334-97f9-a3778bd78d5e-config-data\") pod \"cloudkitty-db-sync-wh9q9\" (UID: \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\") " pod="openstack/cloudkitty-db-sync-wh9q9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.561460 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8671fa02-a5fa-41f0-b232-fdfc4133ab58-logs\") pod \"placement-db-sync-ns8h9\" (UID: \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\") " pod="openstack/placement-db-sync-ns8h9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.561478 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e4e3779f-9f25-4334-97f9-a3778bd78d5e-certs\") pod \"cloudkitty-db-sync-wh9q9\" (UID: \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\") " pod="openstack/cloudkitty-db-sync-wh9q9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.561499 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-bm75n\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.561521 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8671fa02-a5fa-41f0-b232-fdfc4133ab58-combined-ca-bundle\") pod \"placement-db-sync-ns8h9\" (UID: \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\") " pod="openstack/placement-db-sync-ns8h9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.561553 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnfjw\" (UniqueName: \"kubernetes.io/projected/ed4a364a-14cc-442e-9297-ca9497e633ca-kube-api-access-nnfjw\") pod \"dnsmasq-dns-56df8fb6b7-bm75n\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.561573 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-bm75n\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.561592 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-config\") pod \"dnsmasq-dns-56df8fb6b7-bm75n\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.561624 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4e3779f-9f25-4334-97f9-a3778bd78d5e-scripts\") pod \"cloudkitty-db-sync-wh9q9\" (UID: \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\") " pod="openstack/cloudkitty-db-sync-wh9q9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.561647 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4zwf\" (UniqueName: \"kubernetes.io/projected/8671fa02-a5fa-41f0-b232-fdfc4133ab58-kube-api-access-m4zwf\") pod \"placement-db-sync-ns8h9\" (UID: \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\") " pod="openstack/placement-db-sync-ns8h9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.561687 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whzwf\" (UniqueName: \"kubernetes.io/projected/e4e3779f-9f25-4334-97f9-a3778bd78d5e-kube-api-access-whzwf\") pod \"cloudkitty-db-sync-wh9q9\" (UID: \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\") " pod="openstack/cloudkitty-db-sync-wh9q9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.561726 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-bm75n\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.561750 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e3779f-9f25-4334-97f9-a3778bd78d5e-combined-ca-bundle\") pod \"cloudkitty-db-sync-wh9q9\" (UID: \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\") " pod="openstack/cloudkitty-db-sync-wh9q9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.561781 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-bm75n\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.561808 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8671fa02-a5fa-41f0-b232-fdfc4133ab58-config-data\") pod \"placement-db-sync-ns8h9\" (UID: \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\") " pod="openstack/placement-db-sync-ns8h9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.563186 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-bm75n\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.563768 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-bm75n\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.569965 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8671fa02-a5fa-41f0-b232-fdfc4133ab58-logs\") pod \"placement-db-sync-ns8h9\" (UID: \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\") " pod="openstack/placement-db-sync-ns8h9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.571085 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-config\") pod \"dnsmasq-dns-56df8fb6b7-bm75n\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.572466 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-bm75n\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.571320 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-bm75n\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.580466 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e3779f-9f25-4334-97f9-a3778bd78d5e-config-data\") pod \"cloudkitty-db-sync-wh9q9\" (UID: \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\") " pod="openstack/cloudkitty-db-sync-wh9q9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.586142 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8671fa02-a5fa-41f0-b232-fdfc4133ab58-combined-ca-bundle\") pod \"placement-db-sync-ns8h9\" (UID: \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\") " pod="openstack/placement-db-sync-ns8h9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.586174 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8671fa02-a5fa-41f0-b232-fdfc4133ab58-config-data\") pod \"placement-db-sync-ns8h9\" (UID: \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\") " pod="openstack/placement-db-sync-ns8h9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.586185 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e4e3779f-9f25-4334-97f9-a3778bd78d5e-certs\") pod \"cloudkitty-db-sync-wh9q9\" (UID: \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\") " pod="openstack/cloudkitty-db-sync-wh9q9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.587797 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e3779f-9f25-4334-97f9-a3778bd78d5e-combined-ca-bundle\") pod \"cloudkitty-db-sync-wh9q9\" (UID: \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\") " pod="openstack/cloudkitty-db-sync-wh9q9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.598567 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7rwpz"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.600540 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4e3779f-9f25-4334-97f9-a3778bd78d5e-scripts\") pod \"cloudkitty-db-sync-wh9q9\" (UID: \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\") " pod="openstack/cloudkitty-db-sync-wh9q9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.601654 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnfjw\" (UniqueName: \"kubernetes.io/projected/ed4a364a-14cc-442e-9297-ca9497e633ca-kube-api-access-nnfjw\") pod \"dnsmasq-dns-56df8fb6b7-bm75n\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.604849 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whzwf\" (UniqueName: \"kubernetes.io/projected/e4e3779f-9f25-4334-97f9-a3778bd78d5e-kube-api-access-whzwf\") pod \"cloudkitty-db-sync-wh9q9\" (UID: \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\") " pod="openstack/cloudkitty-db-sync-wh9q9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.614793 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4zwf\" (UniqueName: \"kubernetes.io/projected/8671fa02-a5fa-41f0-b232-fdfc4133ab58-kube-api-access-m4zwf\") pod \"placement-db-sync-ns8h9\" (UID: \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\") " pod="openstack/placement-db-sync-ns8h9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.616943 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8671fa02-a5fa-41f0-b232-fdfc4133ab58-scripts\") pod \"placement-db-sync-ns8h9\" (UID: \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\") " pod="openstack/placement-db-sync-ns8h9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.768517 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-wsss7"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.783149 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.925315 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.930274 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.936441 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.936814 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-66d72"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.937379 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.943465 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.949075 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qllz5"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.970988 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-wh9q9"
Feb 19 10:02:48 crc kubenswrapper[4965]: I0219 10:02:48.985050 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ns8h9"
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.000170 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n"
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.013134 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-ws9nm"
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.049285 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.050925 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.054058 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.073853 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpbl8\" (UniqueName: \"kubernetes.io/projected/e56b5777-a1b2-4ee2-8998-0620cbcfb678-kube-api-access-hpbl8\") pod \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") "
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.073916 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-dns-swift-storage-0\") pod \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") "
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.073960 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-ovsdbserver-nb\") pod \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") "
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.074009 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-dns-svc\") pod \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") "
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.074120 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-ovsdbserver-sb\") pod \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") "
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.074152 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-config\") pod \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\" (UID: \"e56b5777-a1b2-4ee2-8998-0620cbcfb678\") "
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.074727 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/277e909a-4dbb-48ae-941a-d9c5e6e22e36-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.074770 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\") pod \"glance-default-external-api-0\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.074816 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName:
\"kubernetes.io/secret/277e909a-4dbb-48ae-941a-d9c5e6e22e36-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.074838 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnmdn\" (UniqueName: \"kubernetes.io/projected/277e909a-4dbb-48ae-941a-d9c5e6e22e36-kube-api-access-dnmdn\") pod \"glance-default-external-api-0\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.074947 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277e909a-4dbb-48ae-941a-d9c5e6e22e36-config-data\") pod \"glance-default-external-api-0\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.075005 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/277e909a-4dbb-48ae-941a-d9c5e6e22e36-scripts\") pod \"glance-default-external-api-0\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.075298 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/277e909a-4dbb-48ae-941a-d9c5e6e22e36-logs\") pod \"glance-default-external-api-0\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.078015 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e56b5777-a1b2-4ee2-8998-0620cbcfb678" (UID: "e56b5777-a1b2-4ee2-8998-0620cbcfb678"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.078367 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e56b5777-a1b2-4ee2-8998-0620cbcfb678" (UID: "e56b5777-a1b2-4ee2-8998-0620cbcfb678"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.078659 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e56b5777-a1b2-4ee2-8998-0620cbcfb678" (UID: "e56b5777-a1b2-4ee2-8998-0620cbcfb678"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.079503 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e56b5777-a1b2-4ee2-8998-0620cbcfb678" (UID: "e56b5777-a1b2-4ee2-8998-0620cbcfb678"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.080033 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-config" (OuterVolumeSpecName: "config") pod "e56b5777-a1b2-4ee2-8998-0620cbcfb678" (UID: "e56b5777-a1b2-4ee2-8998-0620cbcfb678"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.084027 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e56b5777-a1b2-4ee2-8998-0620cbcfb678-kube-api-access-hpbl8" (OuterVolumeSpecName: "kube-api-access-hpbl8") pod "e56b5777-a1b2-4ee2-8998-0620cbcfb678" (UID: "e56b5777-a1b2-4ee2-8998-0620cbcfb678"). InnerVolumeSpecName "kube-api-access-hpbl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.091607 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.173074 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-7plxw" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.176933 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5428ae1e-e83f-4cae-8b81-6a008058186f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.176973 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5428ae1e-e83f-4cae-8b81-6a008058186f-logs\") pod \"glance-default-internal-api-0\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.177013 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") pod 
\"glance-default-internal-api-0\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.177044 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277e909a-4dbb-48ae-941a-d9c5e6e22e36-config-data\") pod \"glance-default-external-api-0\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.177243 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/277e909a-4dbb-48ae-941a-d9c5e6e22e36-scripts\") pod \"glance-default-external-api-0\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.177444 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5428ae1e-e83f-4cae-8b81-6a008058186f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.177506 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt8p7\" (UniqueName: \"kubernetes.io/projected/5428ae1e-e83f-4cae-8b81-6a008058186f-kube-api-access-qt8p7\") pod \"glance-default-internal-api-0\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.177558 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/277e909a-4dbb-48ae-941a-d9c5e6e22e36-logs\") pod \"glance-default-external-api-0\" (UID: 
\"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.177624 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/277e909a-4dbb-48ae-941a-d9c5e6e22e36-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.177648 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\") pod \"glance-default-external-api-0\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.177672 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5428ae1e-e83f-4cae-8b81-6a008058186f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.177704 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277e909a-4dbb-48ae-941a-d9c5e6e22e36-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.177727 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnmdn\" (UniqueName: \"kubernetes.io/projected/277e909a-4dbb-48ae-941a-d9c5e6e22e36-kube-api-access-dnmdn\") pod 
\"glance-default-external-api-0\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.177754 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5428ae1e-e83f-4cae-8b81-6a008058186f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.177811 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.177823 4965 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.177833 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.177842 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.177851 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpbl8\" (UniqueName: \"kubernetes.io/projected/e56b5777-a1b2-4ee2-8998-0620cbcfb678-kube-api-access-hpbl8\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.177860 4965 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/e56b5777-a1b2-4ee2-8998-0620cbcfb678-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.178040 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/277e909a-4dbb-48ae-941a-d9c5e6e22e36-logs\") pod \"glance-default-external-api-0\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.178381 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/277e909a-4dbb-48ae-941a-d9c5e6e22e36-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.182693 4965 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.182728 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\") pod \"glance-default-external-api-0\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6e7bd73c7e8cf1522bc205031417ace7701fabab6d8bd5d89d84d48b59498ea6/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.182729 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/277e909a-4dbb-48ae-941a-d9c5e6e22e36-scripts\") pod \"glance-default-external-api-0\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.182787 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277e909a-4dbb-48ae-941a-d9c5e6e22e36-config-data\") pod \"glance-default-external-api-0\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.183743 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277e909a-4dbb-48ae-941a-d9c5e6e22e36-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.219327 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnmdn\" (UniqueName: \"kubernetes.io/projected/277e909a-4dbb-48ae-941a-d9c5e6e22e36-kube-api-access-dnmdn\") pod 
\"glance-default-external-api-0\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.259960 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wqhc2"] Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.279815 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5428ae1e-e83f-4cae-8b81-6a008058186f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.279905 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5428ae1e-e83f-4cae-8b81-6a008058186f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.279934 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5428ae1e-e83f-4cae-8b81-6a008058186f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.279951 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5428ae1e-e83f-4cae-8b81-6a008058186f-logs\") pod \"glance-default-internal-api-0\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.279987 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") pod \"glance-default-internal-api-0\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.280085 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5428ae1e-e83f-4cae-8b81-6a008058186f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.280105 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt8p7\" (UniqueName: \"kubernetes.io/projected/5428ae1e-e83f-4cae-8b81-6a008058186f-kube-api-access-qt8p7\") pod \"glance-default-internal-api-0\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.286085 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5428ae1e-e83f-4cae-8b81-6a008058186f-logs\") pod \"glance-default-internal-api-0\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.287085 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5428ae1e-e83f-4cae-8b81-6a008058186f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.296749 4965 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.296796 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") pod \"glance-default-internal-api-0\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bbd5634dc66e040ac4fcb8a10b0a021d0db9968a1cda30e816c0dbc4187cf813/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.306124 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5428ae1e-e83f-4cae-8b81-6a008058186f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.307774 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5428ae1e-e83f-4cae-8b81-6a008058186f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.311830 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5428ae1e-e83f-4cae-8b81-6a008058186f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.323179 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt8p7\" (UniqueName: 
\"kubernetes.io/projected/5428ae1e-e83f-4cae-8b81-6a008058186f-kube-api-access-qt8p7\") pod \"glance-default-internal-api-0\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.328698 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7rwpz"] Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.380907 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x98hp\" (UniqueName: \"kubernetes.io/projected/23eeb935-93ea-4fee-9140-89216cef2850-kube-api-access-x98hp\") pod \"23eeb935-93ea-4fee-9140-89216cef2850\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.380956 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-dns-swift-storage-0\") pod \"23eeb935-93ea-4fee-9140-89216cef2850\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.381033 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-dns-svc\") pod \"23eeb935-93ea-4fee-9140-89216cef2850\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.381090 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-ovsdbserver-sb\") pod \"23eeb935-93ea-4fee-9140-89216cef2850\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.381136 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-config\") pod \"23eeb935-93ea-4fee-9140-89216cef2850\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.381187 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-ovsdbserver-nb\") pod \"23eeb935-93ea-4fee-9140-89216cef2850\" (UID: \"23eeb935-93ea-4fee-9140-89216cef2850\") " Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.390031 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23eeb935-93ea-4fee-9140-89216cef2850-kube-api-access-x98hp" (OuterVolumeSpecName: "kube-api-access-x98hp") pod "23eeb935-93ea-4fee-9140-89216cef2850" (UID: "23eeb935-93ea-4fee-9140-89216cef2850"). InnerVolumeSpecName "kube-api-access-x98hp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.393446 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\") pod \"glance-default-external-api-0\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " pod="openstack/glance-default-external-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.404297 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") pod \"glance-default-internal-api-0\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.432070 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-config" (OuterVolumeSpecName: "config") pod "23eeb935-93ea-4fee-9140-89216cef2850" (UID: "23eeb935-93ea-4fee-9140-89216cef2850"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.437898 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "23eeb935-93ea-4fee-9140-89216cef2850" (UID: "23eeb935-93ea-4fee-9140-89216cef2850"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.445811 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "23eeb935-93ea-4fee-9140-89216cef2850" (UID: "23eeb935-93ea-4fee-9140-89216cef2850"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.462874 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "23eeb935-93ea-4fee-9140-89216cef2850" (UID: "23eeb935-93ea-4fee-9140-89216cef2850"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.472112 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.486708 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x98hp\" (UniqueName: \"kubernetes.io/projected/23eeb935-93ea-4fee-9140-89216cef2850-kube-api-access-x98hp\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.486752 4965 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.486763 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.486772 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-config\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.486780 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.503577 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "23eeb935-93ea-4fee-9140-89216cef2850" (UID: "23eeb935-93ea-4fee-9140-89216cef2850"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.504918 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.562431 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wqhc2" event={"ID":"b5ef713d-f101-4d3f-bdb2-6fe4f2966380","Type":"ContainerStarted","Data":"fe44f32e9d47ce76faf15ccfaa4c44c603408ce8f6a278bb887c5d10239ebc45"}
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.568691 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7rwpz" event={"ID":"ce8bac0d-7aa6-437f-b234-370384cf1153","Type":"ContainerStarted","Data":"137b331a032251deb704440163a87a12fcabd54f7a5554cc0933473d44674a62"}
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.573531 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"efd4d0e2-bc4c-4bac-9236-37338445f7c7","Type":"ContainerStarted","Data":"9355f020028e33deb971993c913cae5d71fd140718b60511f8da3567c1db6aee"}
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.573578 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"efd4d0e2-bc4c-4bac-9236-37338445f7c7","Type":"ContainerStarted","Data":"0e5930001a1af1e5c5fe785fc53dbfe52ac8f7ffce4741ef10175702309bd53b"}
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.581274 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-wsss7"]
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.581619 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-ws9nm"
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.581693 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-7plxw" event={"ID":"23eeb935-93ea-4fee-9140-89216cef2850","Type":"ContainerDied","Data":"182503f119ba3e85734f707d61ccfbb44ee6d65759c05d6bd6ed09a2508496f2"}
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.581745 4965 scope.go:117] "RemoveContainer" containerID="4d4b9ddf09f8dbe517647b068902968523cf368894ce455e2660475978387aae"
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.581993 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-7plxw"
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.587950 4965 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23eeb935-93ea-4fee-9140-89216cef2850-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.594740 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:02:49 crc kubenswrapper[4965]: W0219 10:02:49.610390 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcad3660_ade7_407c_9d77_bb1c2c2721a8.slice/crio-fc0afb4918c8382cd21d552560d3252b679e39f5e5b2404026fa93ac9b3a29ab WatchSource:0}: Error finding container fc0afb4918c8382cd21d552560d3252b679e39f5e5b2404026fa93ac9b3a29ab: Status 404 returned error can't find the container with id fc0afb4918c8382cd21d552560d3252b679e39f5e5b2404026fa93ac9b3a29ab
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.662557 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=25.662537988 podStartE2EDuration="25.662537988s" podCreationTimestamp="2026-02-19 10:02:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:49.623565542 +0000 UTC m=+1225.244886852" watchObservedRunningTime="2026-02-19 10:02:49.662537988 +0000 UTC m=+1225.283859458"
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.679318 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-ws9nm"]
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.691804 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-ws9nm"]
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.765125 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-7plxw"]
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.846316 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-7plxw"]
Feb 19 10:02:49 crc kubenswrapper[4965]: I0219 10:02:49.877577 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-wh9q9"]
Feb 19 10:02:50 crc kubenswrapper[4965]: I0219 10:02:50.123505 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qllz5"]
Feb 19 10:02:50 crc kubenswrapper[4965]: I0219 10:02:50.135354 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-bm75n"]
Feb 19 10:02:50 crc kubenswrapper[4965]: I0219 10:02:50.154074 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ns8h9"]
Feb 19 10:02:50 crc kubenswrapper[4965]: W0219 10:02:50.185894 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7bc0481_970b_4e8e_868f_490ea553952e.slice/crio-07ce3cdc4fb747d1465483be806cb2cc3d95a8a99e404860088a0ad42519d50e WatchSource:0}: Error finding container 07ce3cdc4fb747d1465483be806cb2cc3d95a8a99e404860088a0ad42519d50e: Status 404 returned error can't find the container with id 07ce3cdc4fb747d1465483be806cb2cc3d95a8a99e404860088a0ad42519d50e
Feb 19 10:02:50 crc kubenswrapper[4965]: I0219 10:02:50.329425 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 10:02:50 crc kubenswrapper[4965]: I0219 10:02:50.612761 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wqhc2" event={"ID":"b5ef713d-f101-4d3f-bdb2-6fe4f2966380","Type":"ContainerStarted","Data":"d639b0e62244bd72cf3e36a38011fe222908c4f35ba3b0bc5b9e57e2d49084ed"}
Feb 19 10:02:50 crc kubenswrapper[4965]: I0219 10:02:50.627125 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ns8h9" event={"ID":"8671fa02-a5fa-41f0-b232-fdfc4133ab58","Type":"ContainerStarted","Data":"3f7f9e8fff1e38ec6842f289fc0af124699ecb07eabc4eb86b086867e8d004bb"}
Feb 19 10:02:50 crc kubenswrapper[4965]: I0219 10:02:50.647469 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wqhc2" podStartSLOduration=3.647448281 podStartE2EDuration="3.647448281s" podCreationTimestamp="2026-02-19 10:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:50.641253001 +0000 UTC m=+1226.262574301" watchObservedRunningTime="2026-02-19 10:02:50.647448281 +0000 UTC m=+1226.268769591"
Feb 19 10:02:50 crc kubenswrapper[4965]: I0219 10:02:50.647631 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:02:50 crc kubenswrapper[4965]: I0219 10:02:50.668406 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wsss7" event={"ID":"f4c38eda-5f59-4756-a3b7-2731c66ef436","Type":"ContainerStarted","Data":"ef632fe3e3bf9c0ed0f47da3d2d474210be51f58973ecfe5bb091366f7778748"}
Feb 19 10:02:50 crc kubenswrapper[4965]: I0219 10:02:50.668458 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wsss7" event={"ID":"f4c38eda-5f59-4756-a3b7-2731c66ef436","Type":"ContainerStarted","Data":"662d1b50bf9c0e472fbf38f0d3fec5169b4ad9a22c977bcbc8387b8a5ccbfc43"}
Feb 19 10:02:50 crc kubenswrapper[4965]: I0219 10:02:50.668529 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 19 10:02:50 crc kubenswrapper[4965]: I0219 10:02:50.674734 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n" event={"ID":"ed4a364a-14cc-442e-9297-ca9497e633ca","Type":"ContainerStarted","Data":"d8daa333b4a3c0c0a759d1e07534be2227a9380e0a9f631ed0f3dfec9a787422"}
Feb 19 10:02:50 crc kubenswrapper[4965]: I0219 10:02:50.684826 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fcad3660-ade7-407c-9d77-bb1c2c2721a8","Type":"ContainerStarted","Data":"fc0afb4918c8382cd21d552560d3252b679e39f5e5b2404026fa93ac9b3a29ab"}
Feb 19 10:02:50 crc kubenswrapper[4965]: I0219 10:02:50.708678 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5428ae1e-e83f-4cae-8b81-6a008058186f","Type":"ContainerStarted","Data":"f0a124b4a4d67ca120170048c4517f1913ee0cea788e8d1521082ace26c92dc4"}
Feb 19 10:02:50 crc kubenswrapper[4965]: I0219 10:02:50.708756 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 10:02:50 crc kubenswrapper[4965]: I0219 10:02:50.714789 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-wsss7" podStartSLOduration=2.714763855 podStartE2EDuration="2.714763855s" podCreationTimestamp="2026-02-19 10:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:50.708513354 +0000 UTC m=+1226.329834654" watchObservedRunningTime="2026-02-19 10:02:50.714763855 +0000 UTC m=+1226.336085165"
Feb 19 10:02:50 crc kubenswrapper[4965]: I0219 10:02:50.736290 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-wh9q9" event={"ID":"e4e3779f-9f25-4334-97f9-a3778bd78d5e","Type":"ContainerStarted","Data":"66c1218bdd600b8d7b583f7bc8544b41847e9f3dfa59899066b033f707cfcf4b"}
Feb 19 10:02:50 crc kubenswrapper[4965]: I0219 10:02:50.744769 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qllz5" event={"ID":"d7bc0481-970b-4e8e-868f-490ea553952e","Type":"ContainerStarted","Data":"07ce3cdc4fb747d1465483be806cb2cc3d95a8a99e404860088a0ad42519d50e"}
Feb 19 10:02:50 crc kubenswrapper[4965]: I0219 10:02:50.766200 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 10:02:50 crc kubenswrapper[4965]: I0219 10:02:50.898006 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 10:02:50 crc kubenswrapper[4965]: W0219 10:02:50.918899 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod277e909a_4dbb_48ae_941a_d9c5e6e22e36.slice/crio-e8f41115a3bcbc26f31f212f34d5bce8874931c1a166efa68df27b4d44e60b95 WatchSource:0}: Error finding container e8f41115a3bcbc26f31f212f34d5bce8874931c1a166efa68df27b4d44e60b95: Status 404 returned error can't find the container with id e8f41115a3bcbc26f31f212f34d5bce8874931c1a166efa68df27b4d44e60b95
Feb 19 10:02:51 crc kubenswrapper[4965]: I0219 10:02:51.223026 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23eeb935-93ea-4fee-9140-89216cef2850" path="/var/lib/kubelet/pods/23eeb935-93ea-4fee-9140-89216cef2850/volumes"
Feb 19 10:02:51 crc kubenswrapper[4965]: I0219 10:02:51.223843 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e56b5777-a1b2-4ee2-8998-0620cbcfb678" path="/var/lib/kubelet/pods/e56b5777-a1b2-4ee2-8998-0620cbcfb678/volumes"
Feb 19 10:02:51 crc kubenswrapper[4965]: I0219 10:02:51.778732 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5428ae1e-e83f-4cae-8b81-6a008058186f","Type":"ContainerStarted","Data":"32920deaa83f3b1b8f6e1a25258268a7d5ed5637078ebe5b5591690438965122"}
Feb 19 10:02:51 crc kubenswrapper[4965]: I0219 10:02:51.782418 4965 generic.go:334] "Generic (PLEG): container finished" podID="ed4a364a-14cc-442e-9297-ca9497e633ca" containerID="7ef8b8f21e2500aba83002ffad9fe586d9b0e8e2132b8274d346397a4ae1ecde" exitCode=0
Feb 19 10:02:51 crc kubenswrapper[4965]: I0219 10:02:51.782463 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n" event={"ID":"ed4a364a-14cc-442e-9297-ca9497e633ca","Type":"ContainerDied","Data":"7ef8b8f21e2500aba83002ffad9fe586d9b0e8e2132b8274d346397a4ae1ecde"}
Feb 19 10:02:51 crc kubenswrapper[4965]: I0219 10:02:51.786440 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"277e909a-4dbb-48ae-941a-d9c5e6e22e36","Type":"ContainerStarted","Data":"e8f41115a3bcbc26f31f212f34d5bce8874931c1a166efa68df27b4d44e60b95"}
Feb 19 10:02:52 crc kubenswrapper[4965]: I0219 10:02:52.823295 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"277e909a-4dbb-48ae-941a-d9c5e6e22e36","Type":"ContainerStarted","Data":"6981547354c035d875761a9803ed33ad85751cb2aa13adceaf6db1d0fcdb156a"}
Feb 19 10:02:52 crc kubenswrapper[4965]: I0219 10:02:52.828044 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5428ae1e-e83f-4cae-8b81-6a008058186f","Type":"ContainerStarted","Data":"8f1d186e9f54ad2ab3a77edb7fef2acf297a469344501335f2fb0abbec7f6ba9"}
Feb 19 10:02:52 crc kubenswrapper[4965]: I0219 10:02:52.828417 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5428ae1e-e83f-4cae-8b81-6a008058186f" containerName="glance-log" containerID="cri-o://32920deaa83f3b1b8f6e1a25258268a7d5ed5637078ebe5b5591690438965122" gracePeriod=30
Feb 19 10:02:52 crc kubenswrapper[4965]: I0219 10:02:52.830249 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5428ae1e-e83f-4cae-8b81-6a008058186f" containerName="glance-httpd" containerID="cri-o://8f1d186e9f54ad2ab3a77edb7fef2acf297a469344501335f2fb0abbec7f6ba9" gracePeriod=30
Feb 19 10:02:52 crc kubenswrapper[4965]: I0219 10:02:52.858860 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n" event={"ID":"ed4a364a-14cc-442e-9297-ca9497e633ca","Type":"ContainerStarted","Data":"9f0d43a53c0c1183f82212afcc663221a869cf526ccfe5bbd507e34de318526b"}
Feb 19 10:02:52 crc kubenswrapper[4965]: I0219 10:02:52.860708 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n"
Feb 19 10:02:52 crc kubenswrapper[4965]: I0219 10:02:52.886231 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.886208177 podStartE2EDuration="5.886208177s" podCreationTimestamp="2026-02-19 10:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:52.879847843 +0000 UTC m=+1228.501169173" watchObservedRunningTime="2026-02-19 10:02:52.886208177 +0000 UTC m=+1228.507529487"
Feb 19 10:02:52 crc kubenswrapper[4965]: I0219 10:02:52.933109 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n" podStartSLOduration=4.933089486 podStartE2EDuration="4.933089486s" podCreationTimestamp="2026-02-19 10:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:52.907675649 +0000 UTC m=+1228.528996969" watchObservedRunningTime="2026-02-19 10:02:52.933089486 +0000 UTC m=+1228.554410796"
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.643067 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.727796 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5428ae1e-e83f-4cae-8b81-6a008058186f-combined-ca-bundle\") pod \"5428ae1e-e83f-4cae-8b81-6a008058186f\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") "
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.728077 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") pod \"5428ae1e-e83f-4cae-8b81-6a008058186f\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") "
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.728247 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5428ae1e-e83f-4cae-8b81-6a008058186f-scripts\") pod \"5428ae1e-e83f-4cae-8b81-6a008058186f\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") "
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.728290 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt8p7\" (UniqueName: \"kubernetes.io/projected/5428ae1e-e83f-4cae-8b81-6a008058186f-kube-api-access-qt8p7\") pod \"5428ae1e-e83f-4cae-8b81-6a008058186f\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") "
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.728398 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5428ae1e-e83f-4cae-8b81-6a008058186f-config-data\") pod \"5428ae1e-e83f-4cae-8b81-6a008058186f\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") "
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.728415 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5428ae1e-e83f-4cae-8b81-6a008058186f-httpd-run\") pod \"5428ae1e-e83f-4cae-8b81-6a008058186f\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") "
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.728458 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5428ae1e-e83f-4cae-8b81-6a008058186f-logs\") pod \"5428ae1e-e83f-4cae-8b81-6a008058186f\" (UID: \"5428ae1e-e83f-4cae-8b81-6a008058186f\") "
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.729028 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5428ae1e-e83f-4cae-8b81-6a008058186f-logs" (OuterVolumeSpecName: "logs") pod "5428ae1e-e83f-4cae-8b81-6a008058186f" (UID: "5428ae1e-e83f-4cae-8b81-6a008058186f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.729316 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5428ae1e-e83f-4cae-8b81-6a008058186f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5428ae1e-e83f-4cae-8b81-6a008058186f" (UID: "5428ae1e-e83f-4cae-8b81-6a008058186f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.734555 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5428ae1e-e83f-4cae-8b81-6a008058186f-kube-api-access-qt8p7" (OuterVolumeSpecName: "kube-api-access-qt8p7") pod "5428ae1e-e83f-4cae-8b81-6a008058186f" (UID: "5428ae1e-e83f-4cae-8b81-6a008058186f"). InnerVolumeSpecName "kube-api-access-qt8p7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.734652 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5428ae1e-e83f-4cae-8b81-6a008058186f-scripts" (OuterVolumeSpecName: "scripts") pod "5428ae1e-e83f-4cae-8b81-6a008058186f" (UID: "5428ae1e-e83f-4cae-8b81-6a008058186f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.746073 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616" (OuterVolumeSpecName: "glance") pod "5428ae1e-e83f-4cae-8b81-6a008058186f" (UID: "5428ae1e-e83f-4cae-8b81-6a008058186f"). InnerVolumeSpecName "pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.774813 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5428ae1e-e83f-4cae-8b81-6a008058186f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5428ae1e-e83f-4cae-8b81-6a008058186f" (UID: "5428ae1e-e83f-4cae-8b81-6a008058186f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.792020 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5428ae1e-e83f-4cae-8b81-6a008058186f-config-data" (OuterVolumeSpecName: "config-data") pod "5428ae1e-e83f-4cae-8b81-6a008058186f" (UID: "5428ae1e-e83f-4cae-8b81-6a008058186f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.830561 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5428ae1e-e83f-4cae-8b81-6a008058186f-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.830589 4965 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5428ae1e-e83f-4cae-8b81-6a008058186f-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.830601 4965 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5428ae1e-e83f-4cae-8b81-6a008058186f-logs\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.830612 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5428ae1e-e83f-4cae-8b81-6a008058186f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.830639 4965 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") on node \"crc\" "
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.830652 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5428ae1e-e83f-4cae-8b81-6a008058186f-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.830662 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt8p7\" (UniqueName: \"kubernetes.io/projected/5428ae1e-e83f-4cae-8b81-6a008058186f-kube-api-access-qt8p7\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.857007 4965 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.857167 4965 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616") on node "crc"
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.877015 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"277e909a-4dbb-48ae-941a-d9c5e6e22e36","Type":"ContainerStarted","Data":"f7e0c555dbd3766b90e54bc910065cdf0bdd6ac1f12aeca09cea316496c8b790"}
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.877108 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="277e909a-4dbb-48ae-941a-d9c5e6e22e36" containerName="glance-log" containerID="cri-o://6981547354c035d875761a9803ed33ad85751cb2aa13adceaf6db1d0fcdb156a" gracePeriod=30
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.877496 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="277e909a-4dbb-48ae-941a-d9c5e6e22e36" containerName="glance-httpd" containerID="cri-o://f7e0c555dbd3766b90e54bc910065cdf0bdd6ac1f12aeca09cea316496c8b790" gracePeriod=30
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.884594 4965 generic.go:334] "Generic (PLEG): container finished" podID="5428ae1e-e83f-4cae-8b81-6a008058186f" containerID="8f1d186e9f54ad2ab3a77edb7fef2acf297a469344501335f2fb0abbec7f6ba9" exitCode=143
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.884623 4965 generic.go:334] "Generic (PLEG): container finished" podID="5428ae1e-e83f-4cae-8b81-6a008058186f" containerID="32920deaa83f3b1b8f6e1a25258268a7d5ed5637078ebe5b5591690438965122" exitCode=143
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.884967 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.885822 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5428ae1e-e83f-4cae-8b81-6a008058186f","Type":"ContainerDied","Data":"8f1d186e9f54ad2ab3a77edb7fef2acf297a469344501335f2fb0abbec7f6ba9"}
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.885856 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5428ae1e-e83f-4cae-8b81-6a008058186f","Type":"ContainerDied","Data":"32920deaa83f3b1b8f6e1a25258268a7d5ed5637078ebe5b5591690438965122"}
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.885867 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5428ae1e-e83f-4cae-8b81-6a008058186f","Type":"ContainerDied","Data":"f0a124b4a4d67ca120170048c4517f1913ee0cea788e8d1521082ace26c92dc4"}
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.885882 4965 scope.go:117] "RemoveContainer" containerID="8f1d186e9f54ad2ab3a77edb7fef2acf297a469344501335f2fb0abbec7f6ba9"
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.899703 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.899685024 podStartE2EDuration="6.899685024s" podCreationTimestamp="2026-02-19 10:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:02:53.893381651 +0000 UTC m=+1229.514702961" watchObservedRunningTime="2026-02-19 10:02:53.899685024 +0000 UTC m=+1229.521006354"
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.933370 4965 reconciler_common.go:293] "Volume detached for volume \"pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.956516 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.969165 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 10:02:53 crc kubenswrapper[4965]: I0219 10:02:53.983954 4965 scope.go:117] "RemoveContainer" containerID="32920deaa83f3b1b8f6e1a25258268a7d5ed5637078ebe5b5591690438965122"
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.020284 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 10:02:54 crc kubenswrapper[4965]: E0219 10:02:54.020794 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23eeb935-93ea-4fee-9140-89216cef2850" containerName="init"
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.020813 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="23eeb935-93ea-4fee-9140-89216cef2850" containerName="init"
Feb 19 10:02:54 crc kubenswrapper[4965]: E0219 10:02:54.020827 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5428ae1e-e83f-4cae-8b81-6a008058186f" containerName="glance-httpd"
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.020835 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="5428ae1e-e83f-4cae-8b81-6a008058186f" containerName="glance-httpd"
Feb 19 10:02:54 crc kubenswrapper[4965]: E0219 10:02:54.020867 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5428ae1e-e83f-4cae-8b81-6a008058186f" containerName="glance-log"
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.020874 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="5428ae1e-e83f-4cae-8b81-6a008058186f" containerName="glance-log"
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.021050 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="5428ae1e-e83f-4cae-8b81-6a008058186f" containerName="glance-httpd"
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.021073 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="23eeb935-93ea-4fee-9140-89216cef2850" containerName="init"
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.021084 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="5428ae1e-e83f-4cae-8b81-6a008058186f" containerName="glance-log"
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.022298 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.029139 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.058227 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.137348 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-logs\") pod \"glance-default-internal-api-0\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") " pod="openstack/glance-default-internal-api-0"
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.137389 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm66q\" (UniqueName: \"kubernetes.io/projected/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-kube-api-access-dm66q\") pod \"glance-default-internal-api-0\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") " pod="openstack/glance-default-internal-api-0"
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.137413 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") " pod="openstack/glance-default-internal-api-0"
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.137445 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") " pod="openstack/glance-default-internal-api-0"
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.137472 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") " pod="openstack/glance-default-internal-api-0"
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.138033 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") pod \"glance-default-internal-api-0\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") " pod="openstack/glance-default-internal-api-0"
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.138134 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") " pod="openstack/glance-default-internal-api-0"
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.239516 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") " pod="openstack/glance-default-internal-api-0"
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.239657 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-logs\") pod \"glance-default-internal-api-0\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") " pod="openstack/glance-default-internal-api-0"
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.239680 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm66q\" (UniqueName: \"kubernetes.io/projected/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-kube-api-access-dm66q\") pod \"glance-default-internal-api-0\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") " pod="openstack/glance-default-internal-api-0"
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.240305 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-logs\") pod \"glance-default-internal-api-0\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") " pod="openstack/glance-default-internal-api-0"
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.242418 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") " pod="openstack/glance-default-internal-api-0"
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.242740 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") " pod="openstack/glance-default-internal-api-0"
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.242961 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") " pod="openstack/glance-default-internal-api-0"
Feb 19 10:02:54 crc
kubenswrapper[4965]: I0219 10:02:54.243076 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") pod \"glance-default-internal-api-0\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.243564 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.246391 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.246522 4965 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.246562 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") pod \"glance-default-internal-api-0\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bbd5634dc66e040ac4fcb8a10b0a021d0db9968a1cda30e816c0dbc4187cf813/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.247820 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.264419 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm66q\" (UniqueName: \"kubernetes.io/projected/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-kube-api-access-dm66q\") pod \"glance-default-internal-api-0\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.266148 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.328698 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") pod \"glance-default-internal-api-0\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.381079 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.476716 4965 scope.go:117] "RemoveContainer" containerID="8f1d186e9f54ad2ab3a77edb7fef2acf297a469344501335f2fb0abbec7f6ba9" Feb 19 10:02:54 crc kubenswrapper[4965]: E0219 10:02:54.477331 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f1d186e9f54ad2ab3a77edb7fef2acf297a469344501335f2fb0abbec7f6ba9\": container with ID starting with 8f1d186e9f54ad2ab3a77edb7fef2acf297a469344501335f2fb0abbec7f6ba9 not found: ID does not exist" containerID="8f1d186e9f54ad2ab3a77edb7fef2acf297a469344501335f2fb0abbec7f6ba9" Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.477381 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f1d186e9f54ad2ab3a77edb7fef2acf297a469344501335f2fb0abbec7f6ba9"} err="failed to get container status \"8f1d186e9f54ad2ab3a77edb7fef2acf297a469344501335f2fb0abbec7f6ba9\": rpc error: code = NotFound desc = could not find container \"8f1d186e9f54ad2ab3a77edb7fef2acf297a469344501335f2fb0abbec7f6ba9\": container with ID starting with 8f1d186e9f54ad2ab3a77edb7fef2acf297a469344501335f2fb0abbec7f6ba9 not found: ID does not exist" Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.477414 4965 scope.go:117] "RemoveContainer" containerID="32920deaa83f3b1b8f6e1a25258268a7d5ed5637078ebe5b5591690438965122" Feb 19 10:02:54 crc kubenswrapper[4965]: E0219 10:02:54.477777 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"32920deaa83f3b1b8f6e1a25258268a7d5ed5637078ebe5b5591690438965122\": container with ID starting with 32920deaa83f3b1b8f6e1a25258268a7d5ed5637078ebe5b5591690438965122 not found: ID does not exist" containerID="32920deaa83f3b1b8f6e1a25258268a7d5ed5637078ebe5b5591690438965122" Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.477809 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32920deaa83f3b1b8f6e1a25258268a7d5ed5637078ebe5b5591690438965122"} err="failed to get container status \"32920deaa83f3b1b8f6e1a25258268a7d5ed5637078ebe5b5591690438965122\": rpc error: code = NotFound desc = could not find container \"32920deaa83f3b1b8f6e1a25258268a7d5ed5637078ebe5b5591690438965122\": container with ID starting with 32920deaa83f3b1b8f6e1a25258268a7d5ed5637078ebe5b5591690438965122 not found: ID does not exist" Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.477824 4965 scope.go:117] "RemoveContainer" containerID="8f1d186e9f54ad2ab3a77edb7fef2acf297a469344501335f2fb0abbec7f6ba9" Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.478066 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f1d186e9f54ad2ab3a77edb7fef2acf297a469344501335f2fb0abbec7f6ba9"} err="failed to get container status \"8f1d186e9f54ad2ab3a77edb7fef2acf297a469344501335f2fb0abbec7f6ba9\": rpc error: code = NotFound desc = could not find container \"8f1d186e9f54ad2ab3a77edb7fef2acf297a469344501335f2fb0abbec7f6ba9\": container with ID starting with 8f1d186e9f54ad2ab3a77edb7fef2acf297a469344501335f2fb0abbec7f6ba9 not found: ID does not exist" Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.478091 4965 scope.go:117] "RemoveContainer" containerID="32920deaa83f3b1b8f6e1a25258268a7d5ed5637078ebe5b5591690438965122" Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.479066 4965 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"32920deaa83f3b1b8f6e1a25258268a7d5ed5637078ebe5b5591690438965122"} err="failed to get container status \"32920deaa83f3b1b8f6e1a25258268a7d5ed5637078ebe5b5591690438965122\": rpc error: code = NotFound desc = could not find container \"32920deaa83f3b1b8f6e1a25258268a7d5ed5637078ebe5b5591690438965122\": container with ID starting with 32920deaa83f3b1b8f6e1a25258268a7d5ed5637078ebe5b5591690438965122 not found: ID does not exist" Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.902648 4965 generic.go:334] "Generic (PLEG): container finished" podID="277e909a-4dbb-48ae-941a-d9c5e6e22e36" containerID="6981547354c035d875761a9803ed33ad85751cb2aa13adceaf6db1d0fcdb156a" exitCode=143 Feb 19 10:02:54 crc kubenswrapper[4965]: I0219 10:02:54.903019 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"277e909a-4dbb-48ae-941a-d9c5e6e22e36","Type":"ContainerDied","Data":"6981547354c035d875761a9803ed33ad85751cb2aa13adceaf6db1d0fcdb156a"} Feb 19 10:02:55 crc kubenswrapper[4965]: I0219 10:02:55.113182 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:02:55 crc kubenswrapper[4965]: I0219 10:02:55.254438 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5428ae1e-e83f-4cae-8b81-6a008058186f" path="/var/lib/kubelet/pods/5428ae1e-e83f-4cae-8b81-6a008058186f/volumes" Feb 19 10:02:55 crc kubenswrapper[4965]: I0219 10:02:55.668040 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:55 crc kubenswrapper[4965]: I0219 10:02:55.674289 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:55 crc kubenswrapper[4965]: I0219 10:02:55.952367 4965 generic.go:334] "Generic (PLEG): container finished" podID="277e909a-4dbb-48ae-941a-d9c5e6e22e36" 
containerID="f7e0c555dbd3766b90e54bc910065cdf0bdd6ac1f12aeca09cea316496c8b790" exitCode=0 Feb 19 10:02:55 crc kubenswrapper[4965]: I0219 10:02:55.952436 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"277e909a-4dbb-48ae-941a-d9c5e6e22e36","Type":"ContainerDied","Data":"f7e0c555dbd3766b90e54bc910065cdf0bdd6ac1f12aeca09cea316496c8b790"} Feb 19 10:02:55 crc kubenswrapper[4965]: I0219 10:02:55.959914 4965 generic.go:334] "Generic (PLEG): container finished" podID="b5ef713d-f101-4d3f-bdb2-6fe4f2966380" containerID="d639b0e62244bd72cf3e36a38011fe222908c4f35ba3b0bc5b9e57e2d49084ed" exitCode=0 Feb 19 10:02:55 crc kubenswrapper[4965]: I0219 10:02:55.959985 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wqhc2" event={"ID":"b5ef713d-f101-4d3f-bdb2-6fe4f2966380","Type":"ContainerDied","Data":"d639b0e62244bd72cf3e36a38011fe222908c4f35ba3b0bc5b9e57e2d49084ed"} Feb 19 10:02:55 crc kubenswrapper[4965]: I0219 10:02:55.963033 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8ce0a116-70b2-4418-9b02-46ecb4e6d04f","Type":"ContainerStarted","Data":"119612af514ae07dc39f47e97fa0a8299833c2d2f7dda65b923c1aae4ac845b2"} Feb 19 10:02:55 crc kubenswrapper[4965]: I0219 10:02:55.963062 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8ce0a116-70b2-4418-9b02-46ecb4e6d04f","Type":"ContainerStarted","Data":"1ef1be41b753da07954703cea0857f37a604c2bd9219cf9c2e8c57429963cd68"} Feb 19 10:02:55 crc kubenswrapper[4965]: I0219 10:02:55.971286 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 19 10:02:56 crc kubenswrapper[4965]: I0219 10:02:56.163708 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 
10:02:58.289754 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.296509 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wqhc2" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.443773 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\") pod \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.443875 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-config-data\") pod \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.443932 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnmdn\" (UniqueName: \"kubernetes.io/projected/277e909a-4dbb-48ae-941a-d9c5e6e22e36-kube-api-access-dnmdn\") pod \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.443962 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277e909a-4dbb-48ae-941a-d9c5e6e22e36-config-data\") pod \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.443977 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/277e909a-4dbb-48ae-941a-d9c5e6e22e36-combined-ca-bundle\") pod \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.444008 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-combined-ca-bundle\") pod \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.444021 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-credential-keys\") pod \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.444039 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/277e909a-4dbb-48ae-941a-d9c5e6e22e36-scripts\") pod \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.444115 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-fernet-keys\") pod \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.444140 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-scripts\") pod \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.444222 4965 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/277e909a-4dbb-48ae-941a-d9c5e6e22e36-logs\") pod \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.444249 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-745g5\" (UniqueName: \"kubernetes.io/projected/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-kube-api-access-745g5\") pod \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\" (UID: \"b5ef713d-f101-4d3f-bdb2-6fe4f2966380\") " Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.444274 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/277e909a-4dbb-48ae-941a-d9c5e6e22e36-httpd-run\") pod \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\" (UID: \"277e909a-4dbb-48ae-941a-d9c5e6e22e36\") " Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.444976 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/277e909a-4dbb-48ae-941a-d9c5e6e22e36-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "277e909a-4dbb-48ae-941a-d9c5e6e22e36" (UID: "277e909a-4dbb-48ae-941a-d9c5e6e22e36"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.445429 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/277e909a-4dbb-48ae-941a-d9c5e6e22e36-logs" (OuterVolumeSpecName: "logs") pod "277e909a-4dbb-48ae-941a-d9c5e6e22e36" (UID: "277e909a-4dbb-48ae-941a-d9c5e6e22e36"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.451061 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-scripts" (OuterVolumeSpecName: "scripts") pod "b5ef713d-f101-4d3f-bdb2-6fe4f2966380" (UID: "b5ef713d-f101-4d3f-bdb2-6fe4f2966380"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.451261 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277e909a-4dbb-48ae-941a-d9c5e6e22e36-scripts" (OuterVolumeSpecName: "scripts") pod "277e909a-4dbb-48ae-941a-d9c5e6e22e36" (UID: "277e909a-4dbb-48ae-941a-d9c5e6e22e36"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.451896 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/277e909a-4dbb-48ae-941a-d9c5e6e22e36-kube-api-access-dnmdn" (OuterVolumeSpecName: "kube-api-access-dnmdn") pod "277e909a-4dbb-48ae-941a-d9c5e6e22e36" (UID: "277e909a-4dbb-48ae-941a-d9c5e6e22e36"). InnerVolumeSpecName "kube-api-access-dnmdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.453357 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b5ef713d-f101-4d3f-bdb2-6fe4f2966380" (UID: "b5ef713d-f101-4d3f-bdb2-6fe4f2966380"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.454390 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-kube-api-access-745g5" (OuterVolumeSpecName: "kube-api-access-745g5") pod "b5ef713d-f101-4d3f-bdb2-6fe4f2966380" (UID: "b5ef713d-f101-4d3f-bdb2-6fe4f2966380"). InnerVolumeSpecName "kube-api-access-745g5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.471914 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b5ef713d-f101-4d3f-bdb2-6fe4f2966380" (UID: "b5ef713d-f101-4d3f-bdb2-6fe4f2966380"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.472662 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8954926-b989-4d4b-b68d-eda06a80d48a" (OuterVolumeSpecName: "glance") pod "277e909a-4dbb-48ae-941a-d9c5e6e22e36" (UID: "277e909a-4dbb-48ae-941a-d9c5e6e22e36"). InnerVolumeSpecName "pvc-b8954926-b989-4d4b-b68d-eda06a80d48a". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.477118 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5ef713d-f101-4d3f-bdb2-6fe4f2966380" (UID: "b5ef713d-f101-4d3f-bdb2-6fe4f2966380"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.488120 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-config-data" (OuterVolumeSpecName: "config-data") pod "b5ef713d-f101-4d3f-bdb2-6fe4f2966380" (UID: "b5ef713d-f101-4d3f-bdb2-6fe4f2966380"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.516508 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277e909a-4dbb-48ae-941a-d9c5e6e22e36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "277e909a-4dbb-48ae-941a-d9c5e6e22e36" (UID: "277e909a-4dbb-48ae-941a-d9c5e6e22e36"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.541073 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277e909a-4dbb-48ae-941a-d9c5e6e22e36-config-data" (OuterVolumeSpecName: "config-data") pod "277e909a-4dbb-48ae-941a-d9c5e6e22e36" (UID: "277e909a-4dbb-48ae-941a-d9c5e6e22e36"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.549379 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.549405 4965 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/277e909a-4dbb-48ae-941a-d9c5e6e22e36-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.549415 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-745g5\" (UniqueName: \"kubernetes.io/projected/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-kube-api-access-745g5\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.549423 4965 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/277e909a-4dbb-48ae-941a-d9c5e6e22e36-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.549459 4965 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\") on node \"crc\" " Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.549470 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.549482 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnmdn\" (UniqueName: \"kubernetes.io/projected/277e909a-4dbb-48ae-941a-d9c5e6e22e36-kube-api-access-dnmdn\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 
10:02:58.549493 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277e909a-4dbb-48ae-941a-d9c5e6e22e36-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.549501 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277e909a-4dbb-48ae-941a-d9c5e6e22e36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.549509 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.549519 4965 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.549526 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/277e909a-4dbb-48ae-941a-d9c5e6e22e36-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.549535 4965 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5ef713d-f101-4d3f-bdb2-6fe4f2966380-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.580994 4965 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.581141 4965 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b8954926-b989-4d4b-b68d-eda06a80d48a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8954926-b989-4d4b-b68d-eda06a80d48a") on node "crc"
Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.659620 4965 reconciler_common.go:293] "Volume detached for volume \"pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\") on node \"crc\" DevicePath \"\""
Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.990901 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wqhc2" event={"ID":"b5ef713d-f101-4d3f-bdb2-6fe4f2966380","Type":"ContainerDied","Data":"fe44f32e9d47ce76faf15ccfaa4c44c603408ce8f6a278bb887c5d10239ebc45"}
Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.990948 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe44f32e9d47ce76faf15ccfaa4c44c603408ce8f6a278bb887c5d10239ebc45"
Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.990953 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wqhc2"
Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.998793 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"277e909a-4dbb-48ae-941a-d9c5e6e22e36","Type":"ContainerDied","Data":"e8f41115a3bcbc26f31f212f34d5bce8874931c1a166efa68df27b4d44e60b95"}
Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.998837 4965 scope.go:117] "RemoveContainer" containerID="f7e0c555dbd3766b90e54bc910065cdf0bdd6ac1f12aeca09cea316496c8b790"
Feb 19 10:02:58 crc kubenswrapper[4965]: I0219 10:02:58.998966 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.003501 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.060258 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.082416 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.117271 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-7nzqs"]
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.117573 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" podUID="59e04a77-6c47-4906-86c7-72e8a36e120c" containerName="dnsmasq-dns" containerID="cri-o://feeae2cc21bdacee5d2b3b592d0dc5c61c7070aee632a7ecbf902939ce3970f4" gracePeriod=10
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.134409 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 10:02:59 crc kubenswrapper[4965]: E0219 10:02:59.134834 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ef713d-f101-4d3f-bdb2-6fe4f2966380" containerName="keystone-bootstrap"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.134850 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ef713d-f101-4d3f-bdb2-6fe4f2966380" containerName="keystone-bootstrap"
Feb 19 10:02:59 crc kubenswrapper[4965]: E0219 10:02:59.134860 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277e909a-4dbb-48ae-941a-d9c5e6e22e36" containerName="glance-log"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.134866 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="277e909a-4dbb-48ae-941a-d9c5e6e22e36" containerName="glance-log"
Feb 19 10:02:59 crc kubenswrapper[4965]: E0219 10:02:59.134890 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277e909a-4dbb-48ae-941a-d9c5e6e22e36" containerName="glance-httpd"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.134896 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="277e909a-4dbb-48ae-941a-d9c5e6e22e36" containerName="glance-httpd"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.135054 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="277e909a-4dbb-48ae-941a-d9c5e6e22e36" containerName="glance-log"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.135072 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="277e909a-4dbb-48ae-941a-d9c5e6e22e36" containerName="glance-httpd"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.135086 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5ef713d-f101-4d3f-bdb2-6fe4f2966380" containerName="keystone-bootstrap"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.136093 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.145050 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.145306 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.151429 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.217385 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="277e909a-4dbb-48ae-941a-d9c5e6e22e36" path="/var/lib/kubelet/pods/277e909a-4dbb-48ae-941a-d9c5e6e22e36/volumes"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.280422 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a107a22-ae05-4559-aa4b-73a727fc2c29-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.280549 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.280598 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjhz8\" (UniqueName: \"kubernetes.io/projected/8a107a22-ae05-4559-aa4b-73a727fc2c29-kube-api-access-jjhz8\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.280661 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-config-data\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.280691 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a107a22-ae05-4559-aa4b-73a727fc2c29-logs\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.280740 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.280798 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-scripts\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.280843 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.382136 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-config-data\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.382194 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a107a22-ae05-4559-aa4b-73a727fc2c29-logs\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.382739 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.382807 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-scripts\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.382880 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.382946 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a107a22-ae05-4559-aa4b-73a727fc2c29-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.382994 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.383048 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjhz8\" (UniqueName: \"kubernetes.io/projected/8a107a22-ae05-4559-aa4b-73a727fc2c29-kube-api-access-jjhz8\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.383267 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a107a22-ae05-4559-aa4b-73a727fc2c29-logs\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.383791 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a107a22-ae05-4559-aa4b-73a727fc2c29-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.395810 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.397924 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.398946 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-config-data\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.402662 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-scripts\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.416617 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjhz8\" (UniqueName: \"kubernetes.io/projected/8a107a22-ae05-4559-aa4b-73a727fc2c29-kube-api-access-jjhz8\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.428500 4965 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.428554 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6e7bd73c7e8cf1522bc205031417ace7701fabab6d8bd5d89d84d48b59498ea6/globalmount\"" pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.486637 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wqhc2"]
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.503464 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wqhc2"]
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.575216 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-6vqp7"]
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.578452 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6vqp7"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.594646 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9ln6f"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.594853 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.594918 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.595124 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.603275 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6vqp7"]
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.691700 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc4w2\" (UniqueName: \"kubernetes.io/projected/53127e22-4e09-45f9-a73b-641d087fd3cd-kube-api-access-xc4w2\") pod \"keystone-bootstrap-6vqp7\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " pod="openstack/keystone-bootstrap-6vqp7"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.691848 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-fernet-keys\") pod \"keystone-bootstrap-6vqp7\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " pod="openstack/keystone-bootstrap-6vqp7"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.691872 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-config-data\") pod \"keystone-bootstrap-6vqp7\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " pod="openstack/keystone-bootstrap-6vqp7"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.691893 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-credential-keys\") pod \"keystone-bootstrap-6vqp7\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " pod="openstack/keystone-bootstrap-6vqp7"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.691930 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-combined-ca-bundle\") pod \"keystone-bootstrap-6vqp7\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " pod="openstack/keystone-bootstrap-6vqp7"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.691947 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-scripts\") pod \"keystone-bootstrap-6vqp7\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " pod="openstack/keystone-bootstrap-6vqp7"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.744256 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\") pod \"glance-default-external-api-0\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.805401 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.811135 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-fernet-keys\") pod \"keystone-bootstrap-6vqp7\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " pod="openstack/keystone-bootstrap-6vqp7"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.811237 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-config-data\") pod \"keystone-bootstrap-6vqp7\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " pod="openstack/keystone-bootstrap-6vqp7"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.811284 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-credential-keys\") pod \"keystone-bootstrap-6vqp7\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " pod="openstack/keystone-bootstrap-6vqp7"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.811353 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-combined-ca-bundle\") pod \"keystone-bootstrap-6vqp7\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " pod="openstack/keystone-bootstrap-6vqp7"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.811396 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-scripts\") pod \"keystone-bootstrap-6vqp7\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " pod="openstack/keystone-bootstrap-6vqp7"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.811470 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc4w2\" (UniqueName: \"kubernetes.io/projected/53127e22-4e09-45f9-a73b-641d087fd3cd-kube-api-access-xc4w2\") pod \"keystone-bootstrap-6vqp7\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " pod="openstack/keystone-bootstrap-6vqp7"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.824068 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-fernet-keys\") pod \"keystone-bootstrap-6vqp7\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " pod="openstack/keystone-bootstrap-6vqp7"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.825416 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-scripts\") pod \"keystone-bootstrap-6vqp7\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " pod="openstack/keystone-bootstrap-6vqp7"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.826349 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-credential-keys\") pod \"keystone-bootstrap-6vqp7\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " pod="openstack/keystone-bootstrap-6vqp7"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.826795 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-combined-ca-bundle\") pod \"keystone-bootstrap-6vqp7\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " pod="openstack/keystone-bootstrap-6vqp7"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.829515 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc4w2\" (UniqueName: \"kubernetes.io/projected/53127e22-4e09-45f9-a73b-641d087fd3cd-kube-api-access-xc4w2\") pod \"keystone-bootstrap-6vqp7\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " pod="openstack/keystone-bootstrap-6vqp7"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.829722 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-config-data\") pod \"keystone-bootstrap-6vqp7\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " pod="openstack/keystone-bootstrap-6vqp7"
Feb 19 10:02:59 crc kubenswrapper[4965]: I0219 10:02:59.909811 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6vqp7"
Feb 19 10:03:00 crc kubenswrapper[4965]: I0219 10:03:00.011787 4965 generic.go:334] "Generic (PLEG): container finished" podID="59e04a77-6c47-4906-86c7-72e8a36e120c" containerID="feeae2cc21bdacee5d2b3b592d0dc5c61c7070aee632a7ecbf902939ce3970f4" exitCode=0
Feb 19 10:03:00 crc kubenswrapper[4965]: I0219 10:03:00.011824 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" event={"ID":"59e04a77-6c47-4906-86c7-72e8a36e120c","Type":"ContainerDied","Data":"feeae2cc21bdacee5d2b3b592d0dc5c61c7070aee632a7ecbf902939ce3970f4"}
Feb 19 10:03:01 crc kubenswrapper[4965]: I0219 10:03:01.212415 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5ef713d-f101-4d3f-bdb2-6fe4f2966380" path="/var/lib/kubelet/pods/b5ef713d-f101-4d3f-bdb2-6fe4f2966380/volumes"
Feb 19 10:03:03 crc kubenswrapper[4965]: I0219 10:03:03.798062 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" podUID="59e04a77-6c47-4906-86c7-72e8a36e120c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: connect: connection refused"
Feb 19 10:03:07 crc kubenswrapper[4965]: E0219 10:03:07.170069 4965 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified"
Feb 19 10:03:07 crc kubenswrapper[4965]: E0219 10:03:07.171474 4965 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m4zwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-ns8h9_openstack(8671fa02-a5fa-41f0-b232-fdfc4133ab58): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 19 10:03:07 crc kubenswrapper[4965]: E0219 10:03:07.173131 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-ns8h9" podUID="8671fa02-a5fa-41f0-b232-fdfc4133ab58"
Feb 19 10:03:07 crc kubenswrapper[4965]: E0219 10:03:07.478411 4965 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified"
Feb 19 10:03:07 crc kubenswrapper[4965]: E0219 10:03:07.479107 4965 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n57ch655hf6h684hd7h668h5cfh7dh66bhb5h5d7h5f5h87h65fhcdhd4h695hf8h555h88h5dbh5b8hc9h567hc6h577h68fh699h5b9hdbh9bh56dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flv7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(fcad3660-ade7-407c-9d77-bb1c2c2721a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 19 10:03:07 crc kubenswrapper[4965]: I0219 10:03:07.864613 4965 scope.go:117] "RemoveContainer" containerID="6981547354c035d875761a9803ed33ad85751cb2aa13adceaf6db1d0fcdb156a"
Feb 19 10:03:08 crc kubenswrapper[4965]: E0219 10:03:08.099628 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-ns8h9" podUID="8671fa02-a5fa-41f0-b232-fdfc4133ab58"
Feb 19 10:03:08 crc kubenswrapper[4965]: I0219 10:03:08.805895 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" podUID="59e04a77-6c47-4906-86c7-72e8a36e120c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: connect: connection refused"
Feb 19 10:03:11 crc kubenswrapper[4965]: I0219 10:03:11.133527 4965 generic.go:334] "Generic (PLEG): container finished" podID="f4c38eda-5f59-4756-a3b7-2731c66ef436" containerID="ef632fe3e3bf9c0ed0f47da3d2d474210be51f58973ecfe5bb091366f7778748" exitCode=0
Feb 19 10:03:11 crc kubenswrapper[4965]: I0219 10:03:11.133661 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wsss7" event={"ID":"f4c38eda-5f59-4756-a3b7-2731c66ef436","Type":"ContainerDied","Data":"ef632fe3e3bf9c0ed0f47da3d2d474210be51f58973ecfe5bb091366f7778748"}
Feb 19 10:03:13 crc kubenswrapper[4965]: I0219 10:03:13.798141 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" podUID="59e04a77-6c47-4906-86c7-72e8a36e120c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: connect: connection refused"
Feb 19 10:03:13 crc kubenswrapper[4965]: I0219 10:03:13.798513 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs"
Feb 19 10:03:16 crc kubenswrapper[4965]: E0219 10:03:16.739833 4965 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified"
Feb 19 10:03:16 crc kubenswrapper[4965]: E0219 10:03:16.740440 4965 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbbv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-qllz5_openstack(d7bc0481-970b-4e8e-868f-490ea553952e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 19 10:03:16 crc kubenswrapper[4965]: E0219 10:03:16.741588 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-qllz5" podUID="d7bc0481-970b-4e8e-868f-490ea553952e"
Feb 19 10:03:17 crc kubenswrapper[4965]: E0219 10:03:17.257492 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-qllz5" podUID="d7bc0481-970b-4e8e-868f-490ea553952e"
Feb 19 10:03:19 crc kubenswrapper[4965]: I0219 10:03:19.207084 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-wsss7"
Feb 19 10:03:19 crc kubenswrapper[4965]: I0219 10:03:19.272901 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wsss7" event={"ID":"f4c38eda-5f59-4756-a3b7-2731c66ef436","Type":"ContainerDied","Data":"662d1b50bf9c0e472fbf38f0d3fec5169b4ad9a22c977bcbc8387b8a5ccbfc43"}
Feb 19 10:03:19 crc kubenswrapper[4965]: I0219 10:03:19.272942 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="662d1b50bf9c0e472fbf38f0d3fec5169b4ad9a22c977bcbc8387b8a5ccbfc43"
Feb 19 10:03:19 crc kubenswrapper[4965]: I0219 10:03:19.273003 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-wsss7"
Feb 19 10:03:19 crc kubenswrapper[4965]: I0219 10:03:19.340839 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4c38eda-5f59-4756-a3b7-2731c66ef436-config\") pod \"f4c38eda-5f59-4756-a3b7-2731c66ef436\" (UID: \"f4c38eda-5f59-4756-a3b7-2731c66ef436\") "
Feb 19 10:03:19 crc kubenswrapper[4965]: I0219 10:03:19.341016 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4c38eda-5f59-4756-a3b7-2731c66ef436-combined-ca-bundle\") pod \"f4c38eda-5f59-4756-a3b7-2731c66ef436\" (UID: \"f4c38eda-5f59-4756-a3b7-2731c66ef436\") "
Feb 19 10:03:19 crc kubenswrapper[4965]: I0219 10:03:19.341231 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sjwh\" (UniqueName: \"kubernetes.io/projected/f4c38eda-5f59-4756-a3b7-2731c66ef436-kube-api-access-6sjwh\") pod \"f4c38eda-5f59-4756-a3b7-2731c66ef436\" (UID: \"f4c38eda-5f59-4756-a3b7-2731c66ef436\") "
Feb 19 10:03:19 crc kubenswrapper[4965]: I0219 10:03:19.346969 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4c38eda-5f59-4756-a3b7-2731c66ef436-kube-api-access-6sjwh" (OuterVolumeSpecName: "kube-api-access-6sjwh") pod "f4c38eda-5f59-4756-a3b7-2731c66ef436" (UID: "f4c38eda-5f59-4756-a3b7-2731c66ef436"). InnerVolumeSpecName "kube-api-access-6sjwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:03:19 crc kubenswrapper[4965]: I0219 10:03:19.367260 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4c38eda-5f59-4756-a3b7-2731c66ef436-config" (OuterVolumeSpecName: "config") pod "f4c38eda-5f59-4756-a3b7-2731c66ef436" (UID: "f4c38eda-5f59-4756-a3b7-2731c66ef436"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:19 crc kubenswrapper[4965]: I0219 10:03:19.372775 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4c38eda-5f59-4756-a3b7-2731c66ef436-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4c38eda-5f59-4756-a3b7-2731c66ef436" (UID: "f4c38eda-5f59-4756-a3b7-2731c66ef436"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:19 crc kubenswrapper[4965]: I0219 10:03:19.446116 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sjwh\" (UniqueName: \"kubernetes.io/projected/f4c38eda-5f59-4756-a3b7-2731c66ef436-kube-api-access-6sjwh\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:19 crc kubenswrapper[4965]: I0219 10:03:19.446160 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4c38eda-5f59-4756-a3b7-2731c66ef436-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:19 crc kubenswrapper[4965]: I0219 10:03:19.446173 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4c38eda-5f59-4756-a3b7-2731c66ef436-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:20 crc kubenswrapper[4965]: E0219 10:03:20.261296 4965 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 19 10:03:20 crc kubenswrapper[4965]: E0219 10:03:20.261904 4965 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64rvr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-7rwpz_openstack(ce8bac0d-7aa6-437f-b234-370384cf1153): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:03:20 crc kubenswrapper[4965]: E0219 10:03:20.265510 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-7rwpz" podUID="ce8bac0d-7aa6-437f-b234-370384cf1153" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.295879 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" event={"ID":"59e04a77-6c47-4906-86c7-72e8a36e120c","Type":"ContainerDied","Data":"c6b7c57ae0381e69b560b0869d128239d6aad0a50a623cdaf0dbd386bb8ab27c"} Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.295924 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6b7c57ae0381e69b560b0869d128239d6aad0a50a623cdaf0dbd386bb8ab27c" Feb 19 10:03:20 crc kubenswrapper[4965]: E0219 10:03:20.297713 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-7rwpz" podUID="ce8bac0d-7aa6-437f-b234-370384cf1153" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.358571 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.465734 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-config\") pod \"59e04a77-6c47-4906-86c7-72e8a36e120c\" (UID: \"59e04a77-6c47-4906-86c7-72e8a36e120c\") " Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.465819 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-dns-svc\") pod \"59e04a77-6c47-4906-86c7-72e8a36e120c\" (UID: \"59e04a77-6c47-4906-86c7-72e8a36e120c\") " Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.465985 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb4x5\" (UniqueName: \"kubernetes.io/projected/59e04a77-6c47-4906-86c7-72e8a36e120c-kube-api-access-wb4x5\") pod \"59e04a77-6c47-4906-86c7-72e8a36e120c\" (UID: \"59e04a77-6c47-4906-86c7-72e8a36e120c\") " Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.466044 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-ovsdbserver-nb\") pod \"59e04a77-6c47-4906-86c7-72e8a36e120c\" (UID: \"59e04a77-6c47-4906-86c7-72e8a36e120c\") " Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.466145 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-ovsdbserver-sb\") pod \"59e04a77-6c47-4906-86c7-72e8a36e120c\" (UID: \"59e04a77-6c47-4906-86c7-72e8a36e120c\") " Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.505522 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/59e04a77-6c47-4906-86c7-72e8a36e120c-kube-api-access-wb4x5" (OuterVolumeSpecName: "kube-api-access-wb4x5") pod "59e04a77-6c47-4906-86c7-72e8a36e120c" (UID: "59e04a77-6c47-4906-86c7-72e8a36e120c"). InnerVolumeSpecName "kube-api-access-wb4x5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.539086 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-hprpt"] Feb 19 10:03:20 crc kubenswrapper[4965]: E0219 10:03:20.539561 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e04a77-6c47-4906-86c7-72e8a36e120c" containerName="init" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.539578 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e04a77-6c47-4906-86c7-72e8a36e120c" containerName="init" Feb 19 10:03:20 crc kubenswrapper[4965]: E0219 10:03:20.539593 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e04a77-6c47-4906-86c7-72e8a36e120c" containerName="dnsmasq-dns" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.539601 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e04a77-6c47-4906-86c7-72e8a36e120c" containerName="dnsmasq-dns" Feb 19 10:03:20 crc kubenswrapper[4965]: E0219 10:03:20.539614 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4c38eda-5f59-4756-a3b7-2731c66ef436" containerName="neutron-db-sync" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.539622 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4c38eda-5f59-4756-a3b7-2731c66ef436" containerName="neutron-db-sync" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.539854 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e04a77-6c47-4906-86c7-72e8a36e120c" containerName="dnsmasq-dns" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.539876 4965 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4c38eda-5f59-4756-a3b7-2731c66ef436" containerName="neutron-db-sync" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.547112 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-hprpt" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.582901 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb4x5\" (UniqueName: \"kubernetes.io/projected/59e04a77-6c47-4906-86c7-72e8a36e120c-kube-api-access-wb4x5\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.598333 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-hprpt"] Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.607555 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-config" (OuterVolumeSpecName: "config") pod "59e04a77-6c47-4906-86c7-72e8a36e120c" (UID: "59e04a77-6c47-4906-86c7-72e8a36e120c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.617801 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "59e04a77-6c47-4906-86c7-72e8a36e120c" (UID: "59e04a77-6c47-4906-86c7-72e8a36e120c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.659005 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "59e04a77-6c47-4906-86c7-72e8a36e120c" (UID: "59e04a77-6c47-4906-86c7-72e8a36e120c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.684439 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-hprpt\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " pod="openstack/dnsmasq-dns-6b7b667979-hprpt" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.684536 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-hprpt\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " pod="openstack/dnsmasq-dns-6b7b667979-hprpt" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.684633 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-config\") pod \"dnsmasq-dns-6b7b667979-hprpt\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " pod="openstack/dnsmasq-dns-6b7b667979-hprpt" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.684856 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8qj7\" (UniqueName: \"kubernetes.io/projected/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-kube-api-access-s8qj7\") pod \"dnsmasq-dns-6b7b667979-hprpt\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " pod="openstack/dnsmasq-dns-6b7b667979-hprpt" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.684996 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-dns-swift-storage-0\") pod 
\"dnsmasq-dns-6b7b667979-hprpt\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " pod="openstack/dnsmasq-dns-6b7b667979-hprpt" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.685110 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-dns-svc\") pod \"dnsmasq-dns-6b7b667979-hprpt\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " pod="openstack/dnsmasq-dns-6b7b667979-hprpt" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.685462 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.685486 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.685499 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.687011 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "59e04a77-6c47-4906-86c7-72e8a36e120c" (UID: "59e04a77-6c47-4906-86c7-72e8a36e120c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.711838 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-54864c6876-6fmg4"] Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.715425 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54864c6876-6fmg4" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.718089 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.718323 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.718714 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.718752 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-p2rjz" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.734376 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54864c6876-6fmg4"] Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.787844 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-hprpt\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " pod="openstack/dnsmasq-dns-6b7b667979-hprpt" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.787910 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-hprpt\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " pod="openstack/dnsmasq-dns-6b7b667979-hprpt" Feb 
19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.787966 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-config\") pod \"dnsmasq-dns-6b7b667979-hprpt\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " pod="openstack/dnsmasq-dns-6b7b667979-hprpt" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.788007 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8qj7\" (UniqueName: \"kubernetes.io/projected/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-kube-api-access-s8qj7\") pod \"dnsmasq-dns-6b7b667979-hprpt\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " pod="openstack/dnsmasq-dns-6b7b667979-hprpt" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.788026 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-hprpt\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " pod="openstack/dnsmasq-dns-6b7b667979-hprpt" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.788065 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-dns-svc\") pod \"dnsmasq-dns-6b7b667979-hprpt\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " pod="openstack/dnsmasq-dns-6b7b667979-hprpt" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.788117 4965 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59e04a77-6c47-4906-86c7-72e8a36e120c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.788963 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-dns-svc\") pod \"dnsmasq-dns-6b7b667979-hprpt\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " pod="openstack/dnsmasq-dns-6b7b667979-hprpt" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.789256 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-hprpt\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " pod="openstack/dnsmasq-dns-6b7b667979-hprpt" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.789577 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-hprpt\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " pod="openstack/dnsmasq-dns-6b7b667979-hprpt" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.790843 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-config\") pod \"dnsmasq-dns-6b7b667979-hprpt\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " pod="openstack/dnsmasq-dns-6b7b667979-hprpt" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.791604 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-hprpt\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " pod="openstack/dnsmasq-dns-6b7b667979-hprpt" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.811016 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8qj7\" (UniqueName: \"kubernetes.io/projected/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-kube-api-access-s8qj7\") pod 
\"dnsmasq-dns-6b7b667979-hprpt\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " pod="openstack/dnsmasq-dns-6b7b667979-hprpt" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.891376 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9qwr\" (UniqueName: \"kubernetes.io/projected/4bccdd96-d87f-4f40-979a-b650eabac24f-kube-api-access-n9qwr\") pod \"neutron-54864c6876-6fmg4\" (UID: \"4bccdd96-d87f-4f40-979a-b650eabac24f\") " pod="openstack/neutron-54864c6876-6fmg4" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.891457 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-ovndb-tls-certs\") pod \"neutron-54864c6876-6fmg4\" (UID: \"4bccdd96-d87f-4f40-979a-b650eabac24f\") " pod="openstack/neutron-54864c6876-6fmg4" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.891507 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-httpd-config\") pod \"neutron-54864c6876-6fmg4\" (UID: \"4bccdd96-d87f-4f40-979a-b650eabac24f\") " pod="openstack/neutron-54864c6876-6fmg4" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.891579 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-config\") pod \"neutron-54864c6876-6fmg4\" (UID: \"4bccdd96-d87f-4f40-979a-b650eabac24f\") " pod="openstack/neutron-54864c6876-6fmg4" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.891599 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-combined-ca-bundle\") pod 
\"neutron-54864c6876-6fmg4\" (UID: \"4bccdd96-d87f-4f40-979a-b650eabac24f\") " pod="openstack/neutron-54864c6876-6fmg4" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.947659 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-hprpt" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.993501 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-config\") pod \"neutron-54864c6876-6fmg4\" (UID: \"4bccdd96-d87f-4f40-979a-b650eabac24f\") " pod="openstack/neutron-54864c6876-6fmg4" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.993563 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-combined-ca-bundle\") pod \"neutron-54864c6876-6fmg4\" (UID: \"4bccdd96-d87f-4f40-979a-b650eabac24f\") " pod="openstack/neutron-54864c6876-6fmg4" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.993665 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9qwr\" (UniqueName: \"kubernetes.io/projected/4bccdd96-d87f-4f40-979a-b650eabac24f-kube-api-access-n9qwr\") pod \"neutron-54864c6876-6fmg4\" (UID: \"4bccdd96-d87f-4f40-979a-b650eabac24f\") " pod="openstack/neutron-54864c6876-6fmg4" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.993710 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-ovndb-tls-certs\") pod \"neutron-54864c6876-6fmg4\" (UID: \"4bccdd96-d87f-4f40-979a-b650eabac24f\") " pod="openstack/neutron-54864c6876-6fmg4" Feb 19 10:03:20 crc kubenswrapper[4965]: I0219 10:03:20.993760 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" 
(UniqueName: \"kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-httpd-config\") pod \"neutron-54864c6876-6fmg4\" (UID: \"4bccdd96-d87f-4f40-979a-b650eabac24f\") " pod="openstack/neutron-54864c6876-6fmg4" Feb 19 10:03:21 crc kubenswrapper[4965]: I0219 10:03:21.006515 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-config\") pod \"neutron-54864c6876-6fmg4\" (UID: \"4bccdd96-d87f-4f40-979a-b650eabac24f\") " pod="openstack/neutron-54864c6876-6fmg4" Feb 19 10:03:21 crc kubenswrapper[4965]: I0219 10:03:21.007316 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-httpd-config\") pod \"neutron-54864c6876-6fmg4\" (UID: \"4bccdd96-d87f-4f40-979a-b650eabac24f\") " pod="openstack/neutron-54864c6876-6fmg4" Feb 19 10:03:21 crc kubenswrapper[4965]: I0219 10:03:21.012020 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-combined-ca-bundle\") pod \"neutron-54864c6876-6fmg4\" (UID: \"4bccdd96-d87f-4f40-979a-b650eabac24f\") " pod="openstack/neutron-54864c6876-6fmg4" Feb 19 10:03:21 crc kubenswrapper[4965]: I0219 10:03:21.023031 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-ovndb-tls-certs\") pod \"neutron-54864c6876-6fmg4\" (UID: \"4bccdd96-d87f-4f40-979a-b650eabac24f\") " pod="openstack/neutron-54864c6876-6fmg4" Feb 19 10:03:21 crc kubenswrapper[4965]: I0219 10:03:21.035891 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9qwr\" (UniqueName: \"kubernetes.io/projected/4bccdd96-d87f-4f40-979a-b650eabac24f-kube-api-access-n9qwr\") pod \"neutron-54864c6876-6fmg4\" (UID: 
\"4bccdd96-d87f-4f40-979a-b650eabac24f\") " pod="openstack/neutron-54864c6876-6fmg4"
Feb 19 10:03:21 crc kubenswrapper[4965]: I0219 10:03:21.046753 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54864c6876-6fmg4"
Feb 19 10:03:21 crc kubenswrapper[4965]: I0219 10:03:21.306039 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs"
Feb 19 10:03:21 crc kubenswrapper[4965]: I0219 10:03:21.329654 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-7nzqs"]
Feb 19 10:03:21 crc kubenswrapper[4965]: I0219 10:03:21.337596 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-7nzqs"]
Feb 19 10:03:22 crc kubenswrapper[4965]: I0219 10:03:22.787812 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5cbc96c99c-x6mpf"]
Feb 19 10:03:22 crc kubenswrapper[4965]: I0219 10:03:22.789753 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:22 crc kubenswrapper[4965]: I0219 10:03:22.793412 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Feb 19 10:03:22 crc kubenswrapper[4965]: I0219 10:03:22.793442 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Feb 19 10:03:22 crc kubenswrapper[4965]: I0219 10:03:22.800096 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cbc96c99c-x6mpf"]
Feb 19 10:03:22 crc kubenswrapper[4965]: I0219 10:03:22.938245 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-ovndb-tls-certs\") pod \"neutron-5cbc96c99c-x6mpf\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:22 crc kubenswrapper[4965]: I0219 10:03:22.938295 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-internal-tls-certs\") pod \"neutron-5cbc96c99c-x6mpf\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:22 crc kubenswrapper[4965]: I0219 10:03:22.938620 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-public-tls-certs\") pod \"neutron-5cbc96c99c-x6mpf\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:22 crc kubenswrapper[4965]: I0219 10:03:22.938720 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brvgc\" (UniqueName: \"kubernetes.io/projected/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-kube-api-access-brvgc\") pod \"neutron-5cbc96c99c-x6mpf\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:22 crc kubenswrapper[4965]: I0219 10:03:22.938864 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-combined-ca-bundle\") pod \"neutron-5cbc96c99c-x6mpf\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:22 crc kubenswrapper[4965]: I0219 10:03:22.939052 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-config\") pod \"neutron-5cbc96c99c-x6mpf\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:22 crc kubenswrapper[4965]: I0219 10:03:22.939113 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-httpd-config\") pod \"neutron-5cbc96c99c-x6mpf\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:23 crc kubenswrapper[4965]: I0219 10:03:23.040512 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-public-tls-certs\") pod \"neutron-5cbc96c99c-x6mpf\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:23 crc kubenswrapper[4965]: I0219 10:03:23.040569 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brvgc\" (UniqueName: \"kubernetes.io/projected/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-kube-api-access-brvgc\") pod \"neutron-5cbc96c99c-x6mpf\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:23 crc kubenswrapper[4965]: I0219 10:03:23.040621 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-combined-ca-bundle\") pod \"neutron-5cbc96c99c-x6mpf\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:23 crc kubenswrapper[4965]: I0219 10:03:23.040681 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-config\") pod \"neutron-5cbc96c99c-x6mpf\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:23 crc kubenswrapper[4965]: I0219 10:03:23.040707 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-httpd-config\") pod \"neutron-5cbc96c99c-x6mpf\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:23 crc kubenswrapper[4965]: I0219 10:03:23.040724 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-ovndb-tls-certs\") pod \"neutron-5cbc96c99c-x6mpf\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:23 crc kubenswrapper[4965]: I0219 10:03:23.040740 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-internal-tls-certs\") pod \"neutron-5cbc96c99c-x6mpf\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:23 crc kubenswrapper[4965]: I0219 10:03:23.044617 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-internal-tls-certs\") pod \"neutron-5cbc96c99c-x6mpf\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:23 crc kubenswrapper[4965]: I0219 10:03:23.044941 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-ovndb-tls-certs\") pod \"neutron-5cbc96c99c-x6mpf\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:23 crc kubenswrapper[4965]: I0219 10:03:23.045368 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-public-tls-certs\") pod \"neutron-5cbc96c99c-x6mpf\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:23 crc kubenswrapper[4965]: I0219 10:03:23.046969 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-combined-ca-bundle\") pod \"neutron-5cbc96c99c-x6mpf\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:23 crc kubenswrapper[4965]: I0219 10:03:23.047126 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-httpd-config\") pod \"neutron-5cbc96c99c-x6mpf\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:23 crc kubenswrapper[4965]: I0219 10:03:23.047183 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-config\") pod \"neutron-5cbc96c99c-x6mpf\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:23 crc kubenswrapper[4965]: I0219 10:03:23.063960 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brvgc\" (UniqueName: \"kubernetes.io/projected/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-kube-api-access-brvgc\") pod \"neutron-5cbc96c99c-x6mpf\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:23 crc kubenswrapper[4965]: I0219 10:03:23.117021 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:23 crc kubenswrapper[4965]: I0219 10:03:23.209044 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59e04a77-6c47-4906-86c7-72e8a36e120c" path="/var/lib/kubelet/pods/59e04a77-6c47-4906-86c7-72e8a36e120c/volumes"
Feb 19 10:03:23 crc kubenswrapper[4965]: I0219 10:03:23.798477 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-7nzqs" podUID="59e04a77-6c47-4906-86c7-72e8a36e120c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: i/o timeout"
Feb 19 10:03:24 crc kubenswrapper[4965]: I0219 10:03:24.036016 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6vqp7"]
Feb 19 10:03:24 crc kubenswrapper[4965]: I0219 10:03:24.125736 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 10:03:24 crc kubenswrapper[4965]: E0219 10:03:24.486039 4965 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current"
Feb 19 10:03:24 crc kubenswrapper[4965]: E0219 10:03:24.486095 4965 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current"
Feb 19 10:03:24 crc kubenswrapper[4965]: E0219 10:03:24.486293 4965 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-whzwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-wh9q9_openstack(e4e3779f-9f25-4334-97f9-a3778bd78d5e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 19 10:03:24 crc kubenswrapper[4965]: E0219 10:03:24.487844 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cloudkitty-db-sync-wh9q9" podUID="e4e3779f-9f25-4334-97f9-a3778bd78d5e"
Feb 19 10:03:24 crc kubenswrapper[4965]: W0219 10:03:24.814821 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a107a22_ae05_4559_aa4b_73a727fc2c29.slice/crio-0b6183937f51b8faa2ef4499a4ab2c59e567a700f7a270bee37294ec72975af4 WatchSource:0}: Error finding container 0b6183937f51b8faa2ef4499a4ab2c59e567a700f7a270bee37294ec72975af4: Status 404 returned error can't find the container with id 0b6183937f51b8faa2ef4499a4ab2c59e567a700f7a270bee37294ec72975af4
Feb 19 10:03:25 crc kubenswrapper[4965]: I0219 10:03:25.375059 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a107a22-ae05-4559-aa4b-73a727fc2c29","Type":"ContainerStarted","Data":"0b6183937f51b8faa2ef4499a4ab2c59e567a700f7a270bee37294ec72975af4"}
Feb 19 10:03:25 crc kubenswrapper[4965]: I0219 10:03:25.383249 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6vqp7" event={"ID":"53127e22-4e09-45f9-a73b-641d087fd3cd","Type":"ContainerStarted","Data":"3ebc0b821875d223cce0fb49a14bd7d11f2df4cf95d0585b31c4c625055df862"}
Feb 19 10:03:25 crc kubenswrapper[4965]: I0219 10:03:25.383308 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6vqp7" event={"ID":"53127e22-4e09-45f9-a73b-641d087fd3cd","Type":"ContainerStarted","Data":"5a891c142875f44939191d5490ddd7aaa095fb6310cf6e242b2fd1dc13f2b6bf"}
Feb 19 10:03:25 crc kubenswrapper[4965]: I0219 10:03:25.389395 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fcad3660-ade7-407c-9d77-bb1c2c2721a8","Type":"ContainerStarted","Data":"02531bd557132191c6be3729ea9f4a171329a4568aabbf64acc9d9438d720853"}
Feb 19 10:03:25 crc kubenswrapper[4965]: E0219 10:03:25.397180 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-wh9q9" podUID="e4e3779f-9f25-4334-97f9-a3778bd78d5e"
Feb 19 10:03:25 crc kubenswrapper[4965]: I0219 10:03:25.433494 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-6vqp7" podStartSLOduration=26.433472598 podStartE2EDuration="26.433472598s" podCreationTimestamp="2026-02-19 10:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:25.403285005 +0000 UTC m=+1261.024606305" watchObservedRunningTime="2026-02-19 10:03:25.433472598 +0000 UTC m=+1261.054793908"
Feb 19 10:03:25 crc kubenswrapper[4965]: I0219 10:03:25.465768 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-hprpt"]
Feb 19 10:03:25 crc kubenswrapper[4965]: I0219 10:03:25.539329 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54864c6876-6fmg4"]
Feb 19 10:03:25 crc kubenswrapper[4965]: I0219 10:03:25.813254 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cbc96c99c-x6mpf"]
Feb 19 10:03:26 crc kubenswrapper[4965]: I0219 10:03:26.423080 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54864c6876-6fmg4" event={"ID":"4bccdd96-d87f-4f40-979a-b650eabac24f","Type":"ContainerStarted","Data":"03979efa8ac1d4da20fb280931fde41b1fc59b331cb50e516ea29ab30a6bde45"}
Feb 19 10:03:26 crc kubenswrapper[4965]: I0219 10:03:26.423512 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54864c6876-6fmg4" event={"ID":"4bccdd96-d87f-4f40-979a-b650eabac24f","Type":"ContainerStarted","Data":"220b144dc90ee6d345a2a8093536ebb7bb664734a54933d23c9b3c921826f885"}
Feb 19 10:03:26 crc kubenswrapper[4965]: I0219 10:03:26.438741 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8ce0a116-70b2-4418-9b02-46ecb4e6d04f","Type":"ContainerStarted","Data":"048c6109dbd01e56dc48ca304a3bcb62b308a2485d77b4e21d9251fc4d644369"}
Feb 19 10:03:26 crc kubenswrapper[4965]: I0219 10:03:26.438893 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8ce0a116-70b2-4418-9b02-46ecb4e6d04f" containerName="glance-log" containerID="cri-o://119612af514ae07dc39f47e97fa0a8299833c2d2f7dda65b923c1aae4ac845b2" gracePeriod=30
Feb 19 10:03:26 crc kubenswrapper[4965]: I0219 10:03:26.440818 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8ce0a116-70b2-4418-9b02-46ecb4e6d04f" containerName="glance-httpd" containerID="cri-o://048c6109dbd01e56dc48ca304a3bcb62b308a2485d77b4e21d9251fc4d644369" gracePeriod=30
Feb 19 10:03:26 crc kubenswrapper[4965]: I0219 10:03:26.450585 4965 generic.go:334] "Generic (PLEG): container finished" podID="0acf6ac2-e0ae-4315-9e82-656caeeedbb6" containerID="e7d1a98e558e3d8f1a25ef24f0c59ef5df97d74fff3ed683d7f605119b1cad68" exitCode=0
Feb 19 10:03:26 crc kubenswrapper[4965]: I0219 10:03:26.450643 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-hprpt" event={"ID":"0acf6ac2-e0ae-4315-9e82-656caeeedbb6","Type":"ContainerDied","Data":"e7d1a98e558e3d8f1a25ef24f0c59ef5df97d74fff3ed683d7f605119b1cad68"}
Feb 19 10:03:26 crc kubenswrapper[4965]: I0219 10:03:26.450668 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-hprpt" event={"ID":"0acf6ac2-e0ae-4315-9e82-656caeeedbb6","Type":"ContainerStarted","Data":"ff95e90c0fbbfd5543f9e93a9adcc1b359959fbc0657dc387f2a2fb094e681ff"}
Feb 19 10:03:26 crc kubenswrapper[4965]: I0219 10:03:26.480951 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=33.48092884 podStartE2EDuration="33.48092884s" podCreationTimestamp="2026-02-19 10:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:26.466424708 +0000 UTC m=+1262.087746018" watchObservedRunningTime="2026-02-19 10:03:26.48092884 +0000 UTC m=+1262.102250150"
Feb 19 10:03:26 crc kubenswrapper[4965]: I0219 10:03:26.493906 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ns8h9" event={"ID":"8671fa02-a5fa-41f0-b232-fdfc4133ab58","Type":"ContainerStarted","Data":"aa4c026affdae8309e122dbdb7ae257dfa05e7aaab8f20bcd8b59c50364162e4"}
Feb 19 10:03:26 crc kubenswrapper[4965]: I0219 10:03:26.527779 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a107a22-ae05-4559-aa4b-73a727fc2c29","Type":"ContainerStarted","Data":"9ca77900431a612cdfef278233ad7dcc12b792210dfff4bd5d9ad5548faa2706"}
Feb 19 10:03:26 crc kubenswrapper[4965]: I0219 10:03:26.559618 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cbc96c99c-x6mpf" event={"ID":"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54","Type":"ContainerStarted","Data":"137ccfcadc3afe6b21bbd9c90f0f855e1edb71876c6ad8e76ee2d91c8bb38ab5"}
Feb 19 10:03:26 crc kubenswrapper[4965]: I0219 10:03:26.559660 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cbc96c99c-x6mpf" event={"ID":"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54","Type":"ContainerStarted","Data":"b66fa0c769f044d4209b4ad5b159d613000a9930a66e8345c73f73a42dd6320f"}
Feb 19 10:03:27 crc kubenswrapper[4965]: I0219 10:03:27.614325 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-hprpt" event={"ID":"0acf6ac2-e0ae-4315-9e82-656caeeedbb6","Type":"ContainerStarted","Data":"af67119bac7aa3bad15b61e62d797becae8b4f67ddcd5a580a1250f541c52fd0"}
Feb 19 10:03:27 crc kubenswrapper[4965]: I0219 10:03:27.615321 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-hprpt"
Feb 19 10:03:27 crc kubenswrapper[4965]: I0219 10:03:27.633810 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a107a22-ae05-4559-aa4b-73a727fc2c29","Type":"ContainerStarted","Data":"d4b61b76e38165e1d4dada1b2501f40b791392e37e333c8f2a65659c6dbc7a61"}
Feb 19 10:03:27 crc kubenswrapper[4965]: I0219 10:03:27.636553 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-hprpt" podStartSLOduration=7.636531968 podStartE2EDuration="7.636531968s" podCreationTimestamp="2026-02-19 10:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:27.634039757 +0000 UTC m=+1263.255361067" watchObservedRunningTime="2026-02-19 10:03:27.636531968 +0000 UTC m=+1263.257853278"
Feb 19 10:03:27 crc kubenswrapper[4965]: I0219 10:03:27.643747 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-ns8h9" podStartSLOduration=4.919945487 podStartE2EDuration="39.643734643s" podCreationTimestamp="2026-02-19 10:02:48 +0000 UTC" firstStartedPulling="2026-02-19 10:02:50.209650032 +0000 UTC m=+1225.830971342" lastFinishedPulling="2026-02-19 10:03:24.933439188 +0000 UTC m=+1260.554760498" observedRunningTime="2026-02-19 10:03:26.523466743 +0000 UTC m=+1262.144788053" watchObservedRunningTime="2026-02-19 10:03:27.643734643 +0000 UTC m=+1263.265055953"
Feb 19 10:03:27 crc kubenswrapper[4965]: I0219 10:03:27.651163 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cbc96c99c-x6mpf" event={"ID":"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54","Type":"ContainerStarted","Data":"0dd5ff578f88784de0c00bb232e2b39066e6bda734b72385e0e2339a6febfd1a"}
Feb 19 10:03:27 crc kubenswrapper[4965]: I0219 10:03:27.652051 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:27 crc kubenswrapper[4965]: I0219 10:03:27.657785 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54864c6876-6fmg4" event={"ID":"4bccdd96-d87f-4f40-979a-b650eabac24f","Type":"ContainerStarted","Data":"ff0f6d36cd2dc7803669894a9345a3b8a8cc724d286dfbef459b4b0ac0db8074"}
Feb 19 10:03:27 crc kubenswrapper[4965]: I0219 10:03:27.657927 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-54864c6876-6fmg4"
Feb 19 10:03:27 crc kubenswrapper[4965]: I0219 10:03:27.669493 4965 generic.go:334] "Generic (PLEG): container finished" podID="8ce0a116-70b2-4418-9b02-46ecb4e6d04f" containerID="048c6109dbd01e56dc48ca304a3bcb62b308a2485d77b4e21d9251fc4d644369" exitCode=0
Feb 19 10:03:27 crc kubenswrapper[4965]: I0219 10:03:27.669541 4965 generic.go:334] "Generic (PLEG): container finished" podID="8ce0a116-70b2-4418-9b02-46ecb4e6d04f" containerID="119612af514ae07dc39f47e97fa0a8299833c2d2f7dda65b923c1aae4ac845b2" exitCode=143
Feb 19 10:03:27 crc kubenswrapper[4965]: I0219 10:03:27.669571 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8ce0a116-70b2-4418-9b02-46ecb4e6d04f","Type":"ContainerDied","Data":"048c6109dbd01e56dc48ca304a3bcb62b308a2485d77b4e21d9251fc4d644369"}
Feb 19 10:03:27 crc kubenswrapper[4965]: I0219 10:03:27.669605 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8ce0a116-70b2-4418-9b02-46ecb4e6d04f","Type":"ContainerDied","Data":"119612af514ae07dc39f47e97fa0a8299833c2d2f7dda65b923c1aae4ac845b2"}
Feb 19 10:03:27 crc kubenswrapper[4965]: I0219 10:03:27.704167 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=28.704144539 podStartE2EDuration="28.704144539s" podCreationTimestamp="2026-02-19 10:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:27.677592265 +0000 UTC m=+1263.298913575" watchObservedRunningTime="2026-02-19 10:03:27.704144539 +0000 UTC m=+1263.325465849"
Feb 19 10:03:27 crc kubenswrapper[4965]: I0219 10:03:27.724783 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5cbc96c99c-x6mpf" podStartSLOduration=5.72476641 podStartE2EDuration="5.72476641s" podCreationTimestamp="2026-02-19 10:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:27.716038418 +0000 UTC m=+1263.337359728" watchObservedRunningTime="2026-02-19 10:03:27.72476641 +0000 UTC m=+1263.346087710"
Feb 19 10:03:27 crc kubenswrapper[4965]: I0219 10:03:27.729776 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-54864c6876-6fmg4" podStartSLOduration=7.729767681 podStartE2EDuration="7.729767681s" podCreationTimestamp="2026-02-19 10:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:27.69638568 +0000 UTC m=+1263.317707010" watchObservedRunningTime="2026-02-19 10:03:27.729767681 +0000 UTC m=+1263.351088991"
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.017081 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.203294 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm66q\" (UniqueName: \"kubernetes.io/projected/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-kube-api-access-dm66q\") pod \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") "
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.203661 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-logs\") pod \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") "
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.203728 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-config-data\") pod \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") "
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.203784 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-httpd-run\") pod \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") "
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.203817 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-combined-ca-bundle\") pod \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") "
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.203970 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") pod \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") "
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.204023 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-scripts\") pod \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\" (UID: \"8ce0a116-70b2-4418-9b02-46ecb4e6d04f\") "
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.206664 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-logs" (OuterVolumeSpecName: "logs") pod "8ce0a116-70b2-4418-9b02-46ecb4e6d04f" (UID: "8ce0a116-70b2-4418-9b02-46ecb4e6d04f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.206923 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8ce0a116-70b2-4418-9b02-46ecb4e6d04f" (UID: "8ce0a116-70b2-4418-9b02-46ecb4e6d04f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.214006 4965 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-logs\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.214029 4965 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.229406 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-kube-api-access-dm66q" (OuterVolumeSpecName: "kube-api-access-dm66q") pod "8ce0a116-70b2-4418-9b02-46ecb4e6d04f" (UID: "8ce0a116-70b2-4418-9b02-46ecb4e6d04f"). InnerVolumeSpecName "kube-api-access-dm66q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.239370 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-scripts" (OuterVolumeSpecName: "scripts") pod "8ce0a116-70b2-4418-9b02-46ecb4e6d04f" (UID: "8ce0a116-70b2-4418-9b02-46ecb4e6d04f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.239958 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616" (OuterVolumeSpecName: "glance") pod "8ce0a116-70b2-4418-9b02-46ecb4e6d04f" (UID: "8ce0a116-70b2-4418-9b02-46ecb4e6d04f"). InnerVolumeSpecName "pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.256429 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ce0a116-70b2-4418-9b02-46ecb4e6d04f" (UID: "8ce0a116-70b2-4418-9b02-46ecb4e6d04f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.268577 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-config-data" (OuterVolumeSpecName: "config-data") pod "8ce0a116-70b2-4418-9b02-46ecb4e6d04f" (UID: "8ce0a116-70b2-4418-9b02-46ecb4e6d04f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.317150 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.317238 4965 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") on node \"crc\" "
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.317295 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.317310 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm66q\" (UniqueName: \"kubernetes.io/projected/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-kube-api-access-dm66q\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.317324 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ce0a116-70b2-4418-9b02-46ecb4e6d04f-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.361526 4965 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.361662 4965 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616") on node "crc"
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.419654 4965 reconciler_common.go:293] "Volume detached for volume \"pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.681049 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8ce0a116-70b2-4418-9b02-46ecb4e6d04f","Type":"ContainerDied","Data":"1ef1be41b753da07954703cea0857f37a604c2bd9219cf9c2e8c57429963cd68"}
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.681078 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.681104 4965 scope.go:117] "RemoveContainer" containerID="048c6109dbd01e56dc48ca304a3bcb62b308a2485d77b4e21d9251fc4d644369"
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.719944 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.724115 4965 scope.go:117] "RemoveContainer" containerID="119612af514ae07dc39f47e97fa0a8299833c2d2f7dda65b923c1aae4ac845b2"
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.728486 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.759268 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 10:03:28 crc kubenswrapper[4965]: E0219 10:03:28.759914 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ce0a116-70b2-4418-9b02-46ecb4e6d04f" containerName="glance-httpd"
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.759931 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ce0a116-70b2-4418-9b02-46ecb4e6d04f" containerName="glance-httpd"
Feb 19 10:03:28 crc kubenswrapper[4965]: E0219 10:03:28.759965 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ce0a116-70b2-4418-9b02-46ecb4e6d04f" containerName="glance-log"
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.759971 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ce0a116-70b2-4418-9b02-46ecb4e6d04f" containerName="glance-log"
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.760243 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ce0a116-70b2-4418-9b02-46ecb4e6d04f" containerName="glance-log"
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.760284 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ce0a116-70b2-4418-9b02-46ecb4e6d04f" containerName="glance-httpd"
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.761301 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.766425 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.766840 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.817176 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.936216 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-logs\") pod \"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " pod="openstack/glance-default-internal-api-0"
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.936296 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " pod="openstack/glance-default-internal-api-0"
Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.936336 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95d9w\" (UniqueName: \"kubernetes.io/projected/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-kube-api-access-95d9w\") pod \"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") "
pod="openstack/glance-default-internal-api-0" Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.936424 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.936457 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.936485 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.936511 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") pod \"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:03:28 crc kubenswrapper[4965]: I0219 10:03:28.936553 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.038693 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.038771 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95d9w\" (UniqueName: \"kubernetes.io/projected/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-kube-api-access-95d9w\") pod \"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.038867 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.038917 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.039451 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.039511 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") pod \"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.039566 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.039651 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-logs\") pod \"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.039664 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.040957 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-logs\") pod \"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.042626 4965 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.042673 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") pod \"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bbd5634dc66e040ac4fcb8a10b0a021d0db9968a1cda30e816c0dbc4187cf813/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.047471 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.049744 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.050481 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " pod="openstack/glance-default-internal-api-0" Feb 19 
10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.050703 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.056956 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95d9w\" (UniqueName: \"kubernetes.io/projected/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-kube-api-access-95d9w\") pod \"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.079435 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") pod \"glance-default-internal-api-0\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.130721 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.224775 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ce0a116-70b2-4418-9b02-46ecb4e6d04f" path="/var/lib/kubelet/pods/8ce0a116-70b2-4418-9b02-46ecb4e6d04f/volumes" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.692159 4965 generic.go:334] "Generic (PLEG): container finished" podID="8671fa02-a5fa-41f0-b232-fdfc4133ab58" containerID="aa4c026affdae8309e122dbdb7ae257dfa05e7aaab8f20bcd8b59c50364162e4" exitCode=0 Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.692220 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ns8h9" event={"ID":"8671fa02-a5fa-41f0-b232-fdfc4133ab58","Type":"ContainerDied","Data":"aa4c026affdae8309e122dbdb7ae257dfa05e7aaab8f20bcd8b59c50364162e4"} Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.806763 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.806816 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.806830 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.806839 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.841986 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 10:03:29 crc kubenswrapper[4965]: I0219 10:03:29.851456 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 10:03:30 crc 
kubenswrapper[4965]: I0219 10:03:30.702094 4965 generic.go:334] "Generic (PLEG): container finished" podID="53127e22-4e09-45f9-a73b-641d087fd3cd" containerID="3ebc0b821875d223cce0fb49a14bd7d11f2df4cf95d0585b31c4c625055df862" exitCode=0 Feb 19 10:03:30 crc kubenswrapper[4965]: I0219 10:03:30.702130 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6vqp7" event={"ID":"53127e22-4e09-45f9-a73b-641d087fd3cd","Type":"ContainerDied","Data":"3ebc0b821875d223cce0fb49a14bd7d11f2df4cf95d0585b31c4c625055df862"} Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.392402 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ns8h9" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.408128 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8671fa02-a5fa-41f0-b232-fdfc4133ab58-combined-ca-bundle\") pod \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\" (UID: \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\") " Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.408219 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8671fa02-a5fa-41f0-b232-fdfc4133ab58-logs\") pod \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\" (UID: \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\") " Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.408244 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8671fa02-a5fa-41f0-b232-fdfc4133ab58-scripts\") pod \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\" (UID: \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\") " Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.408275 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8671fa02-a5fa-41f0-b232-fdfc4133ab58-config-data\") pod \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\" (UID: \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\") " Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.408369 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4zwf\" (UniqueName: \"kubernetes.io/projected/8671fa02-a5fa-41f0-b232-fdfc4133ab58-kube-api-access-m4zwf\") pod \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\" (UID: \"8671fa02-a5fa-41f0-b232-fdfc4133ab58\") " Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.410982 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8671fa02-a5fa-41f0-b232-fdfc4133ab58-logs" (OuterVolumeSpecName: "logs") pod "8671fa02-a5fa-41f0-b232-fdfc4133ab58" (UID: "8671fa02-a5fa-41f0-b232-fdfc4133ab58"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.413837 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8671fa02-a5fa-41f0-b232-fdfc4133ab58-scripts" (OuterVolumeSpecName: "scripts") pod "8671fa02-a5fa-41f0-b232-fdfc4133ab58" (UID: "8671fa02-a5fa-41f0-b232-fdfc4133ab58"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.422366 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8671fa02-a5fa-41f0-b232-fdfc4133ab58-kube-api-access-m4zwf" (OuterVolumeSpecName: "kube-api-access-m4zwf") pod "8671fa02-a5fa-41f0-b232-fdfc4133ab58" (UID: "8671fa02-a5fa-41f0-b232-fdfc4133ab58"). InnerVolumeSpecName "kube-api-access-m4zwf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.476259 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8671fa02-a5fa-41f0-b232-fdfc4133ab58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8671fa02-a5fa-41f0-b232-fdfc4133ab58" (UID: "8671fa02-a5fa-41f0-b232-fdfc4133ab58"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.479714 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8671fa02-a5fa-41f0-b232-fdfc4133ab58-config-data" (OuterVolumeSpecName: "config-data") pod "8671fa02-a5fa-41f0-b232-fdfc4133ab58" (UID: "8671fa02-a5fa-41f0-b232-fdfc4133ab58"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.510795 4965 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8671fa02-a5fa-41f0-b232-fdfc4133ab58-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.510823 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8671fa02-a5fa-41f0-b232-fdfc4133ab58-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.510832 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8671fa02-a5fa-41f0-b232-fdfc4133ab58-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.510841 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4zwf\" (UniqueName: \"kubernetes.io/projected/8671fa02-a5fa-41f0-b232-fdfc4133ab58-kube-api-access-m4zwf\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:32 crc 
kubenswrapper[4965]: I0219 10:03:32.510850 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8671fa02-a5fa-41f0-b232-fdfc4133ab58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.577784 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6vqp7" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.612766 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-fernet-keys\") pod \"53127e22-4e09-45f9-a73b-641d087fd3cd\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.612827 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-config-data\") pod \"53127e22-4e09-45f9-a73b-641d087fd3cd\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.612857 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc4w2\" (UniqueName: \"kubernetes.io/projected/53127e22-4e09-45f9-a73b-641d087fd3cd-kube-api-access-xc4w2\") pod \"53127e22-4e09-45f9-a73b-641d087fd3cd\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.612984 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-combined-ca-bundle\") pod \"53127e22-4e09-45f9-a73b-641d087fd3cd\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.613078 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-scripts\") pod \"53127e22-4e09-45f9-a73b-641d087fd3cd\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.613104 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-credential-keys\") pod \"53127e22-4e09-45f9-a73b-641d087fd3cd\" (UID: \"53127e22-4e09-45f9-a73b-641d087fd3cd\") " Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.627733 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "53127e22-4e09-45f9-a73b-641d087fd3cd" (UID: "53127e22-4e09-45f9-a73b-641d087fd3cd"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.629672 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53127e22-4e09-45f9-a73b-641d087fd3cd-kube-api-access-xc4w2" (OuterVolumeSpecName: "kube-api-access-xc4w2") pod "53127e22-4e09-45f9-a73b-641d087fd3cd" (UID: "53127e22-4e09-45f9-a73b-641d087fd3cd"). InnerVolumeSpecName "kube-api-access-xc4w2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.637923 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-scripts" (OuterVolumeSpecName: "scripts") pod "53127e22-4e09-45f9-a73b-641d087fd3cd" (UID: "53127e22-4e09-45f9-a73b-641d087fd3cd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.641419 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "53127e22-4e09-45f9-a73b-641d087fd3cd" (UID: "53127e22-4e09-45f9-a73b-641d087fd3cd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.673405 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53127e22-4e09-45f9-a73b-641d087fd3cd" (UID: "53127e22-4e09-45f9-a73b-641d087fd3cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.686644 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-config-data" (OuterVolumeSpecName: "config-data") pod "53127e22-4e09-45f9-a73b-641d087fd3cd" (UID: "53127e22-4e09-45f9-a73b-641d087fd3cd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.722506 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.722537 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc4w2\" (UniqueName: \"kubernetes.io/projected/53127e22-4e09-45f9-a73b-641d087fd3cd-kube-api-access-xc4w2\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.722550 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.722558 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.722566 4965 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.722575 4965 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/53127e22-4e09-45f9-a73b-641d087fd3cd-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.732458 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6vqp7" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.736056 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6vqp7" event={"ID":"53127e22-4e09-45f9-a73b-641d087fd3cd","Type":"ContainerDied","Data":"5a891c142875f44939191d5490ddd7aaa095fb6310cf6e242b2fd1dc13f2b6bf"} Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.736097 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a891c142875f44939191d5490ddd7aaa095fb6310cf6e242b2fd1dc13f2b6bf" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.743595 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ns8h9" event={"ID":"8671fa02-a5fa-41f0-b232-fdfc4133ab58","Type":"ContainerDied","Data":"3f7f9e8fff1e38ec6842f289fc0af124699ecb07eabc4eb86b086867e8d004bb"} Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.743631 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f7f9e8fff1e38ec6842f289fc0af124699ecb07eabc4eb86b086867e8d004bb" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.743689 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-ns8h9" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.939183 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6dbb44f597-5cgmc"] Feb 19 10:03:32 crc kubenswrapper[4965]: E0219 10:03:32.940264 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53127e22-4e09-45f9-a73b-641d087fd3cd" containerName="keystone-bootstrap" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.940283 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="53127e22-4e09-45f9-a73b-641d087fd3cd" containerName="keystone-bootstrap" Feb 19 10:03:32 crc kubenswrapper[4965]: E0219 10:03:32.940304 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8671fa02-a5fa-41f0-b232-fdfc4133ab58" containerName="placement-db-sync" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.940311 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="8671fa02-a5fa-41f0-b232-fdfc4133ab58" containerName="placement-db-sync" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.940538 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="8671fa02-a5fa-41f0-b232-fdfc4133ab58" containerName="placement-db-sync" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.940562 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="53127e22-4e09-45f9-a73b-641d087fd3cd" containerName="keystone-bootstrap" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.941726 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.970523 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6dbb44f597-5cgmc"] Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.971291 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9ln6f" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.972387 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.972697 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.972816 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.972824 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 19 10:03:32 crc kubenswrapper[4965]: I0219 10:03:32.972915 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.031603 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-public-tls-certs\") pod \"keystone-6dbb44f597-5cgmc\" (UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.031648 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-credential-keys\") pod \"keystone-6dbb44f597-5cgmc\" (UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " 
pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.031686 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-fernet-keys\") pod \"keystone-6dbb44f597-5cgmc\" (UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.031705 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-scripts\") pod \"keystone-6dbb44f597-5cgmc\" (UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.031733 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-combined-ca-bundle\") pod \"keystone-6dbb44f597-5cgmc\" (UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.031766 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-config-data\") pod \"keystone-6dbb44f597-5cgmc\" (UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.031849 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9x62\" (UniqueName: \"kubernetes.io/projected/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-kube-api-access-z9x62\") pod \"keystone-6dbb44f597-5cgmc\" (UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " 
pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.031869 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-internal-tls-certs\") pod \"keystone-6dbb44f597-5cgmc\" (UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.069916 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.134245 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-config-data\") pod \"keystone-6dbb44f597-5cgmc\" (UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.134705 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9x62\" (UniqueName: \"kubernetes.io/projected/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-kube-api-access-z9x62\") pod \"keystone-6dbb44f597-5cgmc\" (UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.134748 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-internal-tls-certs\") pod \"keystone-6dbb44f597-5cgmc\" (UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.134803 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-public-tls-certs\") pod \"keystone-6dbb44f597-5cgmc\" (UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.134867 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-credential-keys\") pod \"keystone-6dbb44f597-5cgmc\" (UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.135859 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-fernet-keys\") pod \"keystone-6dbb44f597-5cgmc\" (UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.135986 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-scripts\") pod \"keystone-6dbb44f597-5cgmc\" (UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.136184 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-combined-ca-bundle\") pod \"keystone-6dbb44f597-5cgmc\" (UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.139853 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-credential-keys\") pod \"keystone-6dbb44f597-5cgmc\" 
(UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.139970 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-fernet-keys\") pod \"keystone-6dbb44f597-5cgmc\" (UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.140301 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-internal-tls-certs\") pod \"keystone-6dbb44f597-5cgmc\" (UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.141101 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-config-data\") pod \"keystone-6dbb44f597-5cgmc\" (UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.141802 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-scripts\") pod \"keystone-6dbb44f597-5cgmc\" (UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.142857 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-combined-ca-bundle\") pod \"keystone-6dbb44f597-5cgmc\" (UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 
10:03:33.143330 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-public-tls-certs\") pod \"keystone-6dbb44f597-5cgmc\" (UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.151114 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9x62\" (UniqueName: \"kubernetes.io/projected/3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b-kube-api-access-z9x62\") pod \"keystone-6dbb44f597-5cgmc\" (UID: \"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b\") " pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.289888 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.728285 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-fbb65bccb-zmlg7"] Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.730407 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.734671 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.734958 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.735124 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-sbdps" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.735335 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.738311 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.751207 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-fbb65bccb-zmlg7"] Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.767278 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8505e9f1-238a-4f32-95a4-95979a4f7bac-public-tls-certs\") pod \"placement-fbb65bccb-zmlg7\" (UID: \"8505e9f1-238a-4f32-95a4-95979a4f7bac\") " pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.767438 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8505e9f1-238a-4f32-95a4-95979a4f7bac-internal-tls-certs\") pod \"placement-fbb65bccb-zmlg7\" (UID: \"8505e9f1-238a-4f32-95a4-95979a4f7bac\") " pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.767469 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8505e9f1-238a-4f32-95a4-95979a4f7bac-config-data\") pod \"placement-fbb65bccb-zmlg7\" (UID: \"8505e9f1-238a-4f32-95a4-95979a4f7bac\") " pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.767486 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8505e9f1-238a-4f32-95a4-95979a4f7bac-combined-ca-bundle\") pod \"placement-fbb65bccb-zmlg7\" (UID: \"8505e9f1-238a-4f32-95a4-95979a4f7bac\") " pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.767513 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8505e9f1-238a-4f32-95a4-95979a4f7bac-scripts\") pod \"placement-fbb65bccb-zmlg7\" (UID: \"8505e9f1-238a-4f32-95a4-95979a4f7bac\") " pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.767551 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8505e9f1-238a-4f32-95a4-95979a4f7bac-logs\") pod \"placement-fbb65bccb-zmlg7\" (UID: \"8505e9f1-238a-4f32-95a4-95979a4f7bac\") " pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.767590 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9dlh\" (UniqueName: \"kubernetes.io/projected/8505e9f1-238a-4f32-95a4-95979a4f7bac-kube-api-access-v9dlh\") pod \"placement-fbb65bccb-zmlg7\" (UID: \"8505e9f1-238a-4f32-95a4-95979a4f7bac\") " pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.786924 4965 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"fcad3660-ade7-407c-9d77-bb1c2c2721a8","Type":"ContainerStarted","Data":"5154361b55a0d174534bfd97c9f04b35cfce5c6ecac8e56a0dc33ee4d28e72bc"} Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.789123 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4","Type":"ContainerStarted","Data":"c6b7bdfc41c1ed122f6a4051463c7fa61d7e6bb34c838c8b7f40d4625f4665cb"} Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.794782 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qllz5" event={"ID":"d7bc0481-970b-4e8e-868f-490ea553952e","Type":"ContainerStarted","Data":"5ab05f3592b2f219ed44ac0e86ad608bd92a1ab776be7429a7ef3f0bd8f2b808"} Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.833743 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-qllz5" podStartSLOduration=3.634939458 podStartE2EDuration="45.833726793s" podCreationTimestamp="2026-02-19 10:02:48 +0000 UTC" firstStartedPulling="2026-02-19 10:02:50.191303677 +0000 UTC m=+1225.812624987" lastFinishedPulling="2026-02-19 10:03:32.390091012 +0000 UTC m=+1268.011412322" observedRunningTime="2026-02-19 10:03:33.831625791 +0000 UTC m=+1269.452947101" watchObservedRunningTime="2026-02-19 10:03:33.833726793 +0000 UTC m=+1269.455048103" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.880942 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8505e9f1-238a-4f32-95a4-95979a4f7bac-public-tls-certs\") pod \"placement-fbb65bccb-zmlg7\" (UID: \"8505e9f1-238a-4f32-95a4-95979a4f7bac\") " pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.887660 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8505e9f1-238a-4f32-95a4-95979a4f7bac-internal-tls-certs\") pod \"placement-fbb65bccb-zmlg7\" (UID: \"8505e9f1-238a-4f32-95a4-95979a4f7bac\") " pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.890398 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8505e9f1-238a-4f32-95a4-95979a4f7bac-config-data\") pod \"placement-fbb65bccb-zmlg7\" (UID: \"8505e9f1-238a-4f32-95a4-95979a4f7bac\") " pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.890587 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8505e9f1-238a-4f32-95a4-95979a4f7bac-combined-ca-bundle\") pod \"placement-fbb65bccb-zmlg7\" (UID: \"8505e9f1-238a-4f32-95a4-95979a4f7bac\") " pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.891438 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8505e9f1-238a-4f32-95a4-95979a4f7bac-scripts\") pod \"placement-fbb65bccb-zmlg7\" (UID: \"8505e9f1-238a-4f32-95a4-95979a4f7bac\") " pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.892312 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8505e9f1-238a-4f32-95a4-95979a4f7bac-logs\") pod \"placement-fbb65bccb-zmlg7\" (UID: \"8505e9f1-238a-4f32-95a4-95979a4f7bac\") " pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.892502 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9dlh\" (UniqueName: \"kubernetes.io/projected/8505e9f1-238a-4f32-95a4-95979a4f7bac-kube-api-access-v9dlh\") pod 
\"placement-fbb65bccb-zmlg7\" (UID: \"8505e9f1-238a-4f32-95a4-95979a4f7bac\") " pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.893631 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8505e9f1-238a-4f32-95a4-95979a4f7bac-logs\") pod \"placement-fbb65bccb-zmlg7\" (UID: \"8505e9f1-238a-4f32-95a4-95979a4f7bac\") " pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.967326 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8505e9f1-238a-4f32-95a4-95979a4f7bac-scripts\") pod \"placement-fbb65bccb-zmlg7\" (UID: \"8505e9f1-238a-4f32-95a4-95979a4f7bac\") " pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.977560 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8505e9f1-238a-4f32-95a4-95979a4f7bac-public-tls-certs\") pod \"placement-fbb65bccb-zmlg7\" (UID: \"8505e9f1-238a-4f32-95a4-95979a4f7bac\") " pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:03:33 crc kubenswrapper[4965]: I0219 10:03:33.985019 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8505e9f1-238a-4f32-95a4-95979a4f7bac-internal-tls-certs\") pod \"placement-fbb65bccb-zmlg7\" (UID: \"8505e9f1-238a-4f32-95a4-95979a4f7bac\") " pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:03:34 crc kubenswrapper[4965]: I0219 10:03:34.004346 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9dlh\" (UniqueName: \"kubernetes.io/projected/8505e9f1-238a-4f32-95a4-95979a4f7bac-kube-api-access-v9dlh\") pod \"placement-fbb65bccb-zmlg7\" (UID: \"8505e9f1-238a-4f32-95a4-95979a4f7bac\") " pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:03:34 crc 
kubenswrapper[4965]: I0219 10:03:34.004471 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8505e9f1-238a-4f32-95a4-95979a4f7bac-config-data\") pod \"placement-fbb65bccb-zmlg7\" (UID: \"8505e9f1-238a-4f32-95a4-95979a4f7bac\") " pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:03:34 crc kubenswrapper[4965]: I0219 10:03:34.005083 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8505e9f1-238a-4f32-95a4-95979a4f7bac-combined-ca-bundle\") pod \"placement-fbb65bccb-zmlg7\" (UID: \"8505e9f1-238a-4f32-95a4-95979a4f7bac\") " pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:03:34 crc kubenswrapper[4965]: I0219 10:03:34.071861 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:03:34 crc kubenswrapper[4965]: I0219 10:03:34.129861 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6dbb44f597-5cgmc"] Feb 19 10:03:34 crc kubenswrapper[4965]: I0219 10:03:34.510903 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 10:03:34 crc kubenswrapper[4965]: I0219 10:03:34.534111 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 10:03:34 crc kubenswrapper[4965]: I0219 10:03:34.673707 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-fbb65bccb-zmlg7"] Feb 19 10:03:34 crc kubenswrapper[4965]: W0219 10:03:34.698442 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8505e9f1_238a_4f32_95a4_95979a4f7bac.slice/crio-9c01e6c8ce0b14ad7f32c16789a00354dd57643f9b7218722635add62e16b0ae WatchSource:0}: Error finding container 
9c01e6c8ce0b14ad7f32c16789a00354dd57643f9b7218722635add62e16b0ae: Status 404 returned error can't find the container with id 9c01e6c8ce0b14ad7f32c16789a00354dd57643f9b7218722635add62e16b0ae Feb 19 10:03:34 crc kubenswrapper[4965]: I0219 10:03:34.806811 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fbb65bccb-zmlg7" event={"ID":"8505e9f1-238a-4f32-95a4-95979a4f7bac","Type":"ContainerStarted","Data":"9c01e6c8ce0b14ad7f32c16789a00354dd57643f9b7218722635add62e16b0ae"} Feb 19 10:03:34 crc kubenswrapper[4965]: I0219 10:03:34.808114 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6dbb44f597-5cgmc" event={"ID":"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b","Type":"ContainerStarted","Data":"39900e808f623c1d57161f8c2d50cb8fb57418b49a636dbb83c5523d50c619d8"} Feb 19 10:03:35 crc kubenswrapper[4965]: I0219 10:03:35.950464 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-hprpt" Feb 19 10:03:36 crc kubenswrapper[4965]: I0219 10:03:36.019612 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-bm75n"] Feb 19 10:03:36 crc kubenswrapper[4965]: I0219 10:03:36.019865 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n" podUID="ed4a364a-14cc-442e-9297-ca9497e633ca" containerName="dnsmasq-dns" containerID="cri-o://9f0d43a53c0c1183f82212afcc663221a869cf526ccfe5bbd507e34de318526b" gracePeriod=10 Feb 19 10:03:36 crc kubenswrapper[4965]: E0219 10:03:36.074526 4965 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded4a364a_14cc_442e_9297_ca9497e633ca.slice/crio-9f0d43a53c0c1183f82212afcc663221a869cf526ccfe5bbd507e34de318526b.scope\": RecentStats: unable to find data in memory cache]" Feb 19 10:03:38 crc kubenswrapper[4965]: I0219 10:03:38.861798 
4965 generic.go:334] "Generic (PLEG): container finished" podID="ed4a364a-14cc-442e-9297-ca9497e633ca" containerID="9f0d43a53c0c1183f82212afcc663221a869cf526ccfe5bbd507e34de318526b" exitCode=0 Feb 19 10:03:38 crc kubenswrapper[4965]: I0219 10:03:38.862382 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n" event={"ID":"ed4a364a-14cc-442e-9297-ca9497e633ca","Type":"ContainerDied","Data":"9f0d43a53c0c1183f82212afcc663221a869cf526ccfe5bbd507e34de318526b"} Feb 19 10:03:38 crc kubenswrapper[4965]: I0219 10:03:38.871617 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4","Type":"ContainerStarted","Data":"49fcaca484061b23abe5163e6400766b358539b4d4e48b2ae87e123e3faf885b"} Feb 19 10:03:38 crc kubenswrapper[4965]: I0219 10:03:38.877482 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7rwpz" event={"ID":"ce8bac0d-7aa6-437f-b234-370384cf1153","Type":"ContainerStarted","Data":"79cbeeb63a28f5c389b9ac0a139d4e62412e586c54b91d1d69a4dc78b98f0110"} Feb 19 10:03:38 crc kubenswrapper[4965]: I0219 10:03:38.885318 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fbb65bccb-zmlg7" event={"ID":"8505e9f1-238a-4f32-95a4-95979a4f7bac","Type":"ContainerStarted","Data":"e3d2741dbf2ece37d0935d2c4fde450cc43a557aa5f8aebf05e9b14ce6ac0047"} Feb 19 10:03:38 crc kubenswrapper[4965]: I0219 10:03:38.900361 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6dbb44f597-5cgmc" event={"ID":"3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b","Type":"ContainerStarted","Data":"5997e7a002c15a32cff27817e215df6a0f85b6fe044975fc8154f13b83be9506"} Feb 19 10:03:38 crc kubenswrapper[4965]: I0219 10:03:38.900776 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:03:38 crc kubenswrapper[4965]: I0219 10:03:38.914340 4965 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-7rwpz" podStartSLOduration=8.613748671 podStartE2EDuration="51.914324137s" podCreationTimestamp="2026-02-19 10:02:47 +0000 UTC" firstStartedPulling="2026-02-19 10:02:49.362237267 +0000 UTC m=+1224.983558577" lastFinishedPulling="2026-02-19 10:03:32.662812733 +0000 UTC m=+1268.284134043" observedRunningTime="2026-02-19 10:03:38.901173518 +0000 UTC m=+1274.522494828" watchObservedRunningTime="2026-02-19 10:03:38.914324137 +0000 UTC m=+1274.535645447" Feb 19 10:03:38 crc kubenswrapper[4965]: I0219 10:03:38.926948 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6dbb44f597-5cgmc" podStartSLOduration=6.926931023 podStartE2EDuration="6.926931023s" podCreationTimestamp="2026-02-19 10:03:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:38.92271218 +0000 UTC m=+1274.544033480" watchObservedRunningTime="2026-02-19 10:03:38.926931023 +0000 UTC m=+1274.548252333" Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.001566 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n" podUID="ed4a364a-14cc-442e-9297-ca9497e633ca" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.164:5353: connect: connection refused" Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.287235 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n" Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.446862 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-config\") pod \"ed4a364a-14cc-442e-9297-ca9497e633ca\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.447010 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-dns-swift-storage-0\") pod \"ed4a364a-14cc-442e-9297-ca9497e633ca\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.447053 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-dns-svc\") pod \"ed4a364a-14cc-442e-9297-ca9497e633ca\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.447133 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnfjw\" (UniqueName: \"kubernetes.io/projected/ed4a364a-14cc-442e-9297-ca9497e633ca-kube-api-access-nnfjw\") pod \"ed4a364a-14cc-442e-9297-ca9497e633ca\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.447261 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-ovsdbserver-nb\") pod \"ed4a364a-14cc-442e-9297-ca9497e633ca\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.447348 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-ovsdbserver-sb\") pod \"ed4a364a-14cc-442e-9297-ca9497e633ca\" (UID: \"ed4a364a-14cc-442e-9297-ca9497e633ca\") " Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.459407 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed4a364a-14cc-442e-9297-ca9497e633ca-kube-api-access-nnfjw" (OuterVolumeSpecName: "kube-api-access-nnfjw") pod "ed4a364a-14cc-442e-9297-ca9497e633ca" (UID: "ed4a364a-14cc-442e-9297-ca9497e633ca"). InnerVolumeSpecName "kube-api-access-nnfjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.558380 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnfjw\" (UniqueName: \"kubernetes.io/projected/ed4a364a-14cc-442e-9297-ca9497e633ca-kube-api-access-nnfjw\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.568864 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ed4a364a-14cc-442e-9297-ca9497e633ca" (UID: "ed4a364a-14cc-442e-9297-ca9497e633ca"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.578569 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ed4a364a-14cc-442e-9297-ca9497e633ca" (UID: "ed4a364a-14cc-442e-9297-ca9497e633ca"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.599243 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ed4a364a-14cc-442e-9297-ca9497e633ca" (UID: "ed4a364a-14cc-442e-9297-ca9497e633ca"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.604263 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ed4a364a-14cc-442e-9297-ca9497e633ca" (UID: "ed4a364a-14cc-442e-9297-ca9497e633ca"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.622483 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-config" (OuterVolumeSpecName: "config") pod "ed4a364a-14cc-442e-9297-ca9497e633ca" (UID: "ed4a364a-14cc-442e-9297-ca9497e633ca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.660519 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.660559 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.660573 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-config\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.660584 4965 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.660594 4965 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed4a364a-14cc-442e-9297-ca9497e633ca-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.918719 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fbb65bccb-zmlg7" event={"ID":"8505e9f1-238a-4f32-95a4-95979a4f7bac","Type":"ContainerStarted","Data":"4bfe31abf29499a1ad658941feacb4f812314d64142ce43957957a6cb76946ab"}
Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.918766 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-fbb65bccb-zmlg7"
Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.918788 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-fbb65bccb-zmlg7"
Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.926147 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n" event={"ID":"ed4a364a-14cc-442e-9297-ca9497e633ca","Type":"ContainerDied","Data":"d8daa333b4a3c0c0a759d1e07534be2227a9380e0a9f631ed0f3dfec9a787422"}
Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.926213 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-bm75n"
Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.926278 4965 scope.go:117] "RemoveContainer" containerID="9f0d43a53c0c1183f82212afcc663221a869cf526ccfe5bbd507e34de318526b"
Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.928494 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4","Type":"ContainerStarted","Data":"875e5c0e930aae00aa948239350bc6c796d241e6326b1a4f9a9c5c8717df04da"}
Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.948495 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-wh9q9" event={"ID":"e4e3779f-9f25-4334-97f9-a3778bd78d5e","Type":"ContainerStarted","Data":"0b8cd47096442e6c01e98dc21447886c4820940445b68906e0b057f855e32074"}
Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.976501 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-fbb65bccb-zmlg7" podStartSLOduration=6.976476576 podStartE2EDuration="6.976476576s" podCreationTimestamp="2026-02-19 10:03:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:39.938428132 +0000 UTC m=+1275.559749452" watchObservedRunningTime="2026-02-19 10:03:39.976476576 +0000 UTC m=+1275.597797896"
Feb 19 10:03:39 crc kubenswrapper[4965]: I0219 10:03:39.982699 4965 scope.go:117] "RemoveContainer" containerID="7ef8b8f21e2500aba83002ffad9fe586d9b0e8e2132b8274d346397a4ae1ecde"
Feb 19 10:03:40 crc kubenswrapper[4965]: I0219 10:03:40.004390 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=12.004372413 podStartE2EDuration="12.004372413s" podCreationTimestamp="2026-02-19 10:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:39.975181284 +0000 UTC m=+1275.596502604" watchObservedRunningTime="2026-02-19 10:03:40.004372413 +0000 UTC m=+1275.625693723"
Feb 19 10:03:40 crc kubenswrapper[4965]: I0219 10:03:40.019374 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-bm75n"]
Feb 19 10:03:40 crc kubenswrapper[4965]: I0219 10:03:40.029849 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-bm75n"]
Feb 19 10:03:40 crc kubenswrapper[4965]: I0219 10:03:40.031567 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-wh9q9" podStartSLOduration=2.998940995 podStartE2EDuration="52.031555743s" podCreationTimestamp="2026-02-19 10:02:48 +0000 UTC" firstStartedPulling="2026-02-19 10:02:49.832623557 +0000 UTC m=+1225.453944867" lastFinishedPulling="2026-02-19 10:03:38.865238295 +0000 UTC m=+1274.486559615" observedRunningTime="2026-02-19 10:03:40.007813016 +0000 UTC m=+1275.629134326" watchObservedRunningTime="2026-02-19 10:03:40.031555743 +0000 UTC m=+1275.652877043"
Feb 19 10:03:40 crc kubenswrapper[4965]: I0219 10:03:40.972268 4965 generic.go:334] "Generic (PLEG): container finished" podID="d7bc0481-970b-4e8e-868f-490ea553952e" containerID="5ab05f3592b2f219ed44ac0e86ad608bd92a1ab776be7429a7ef3f0bd8f2b808" exitCode=0
Feb 19 10:03:40 crc kubenswrapper[4965]: I0219 10:03:40.972387 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qllz5" event={"ID":"d7bc0481-970b-4e8e-868f-490ea553952e","Type":"ContainerDied","Data":"5ab05f3592b2f219ed44ac0e86ad608bd92a1ab776be7429a7ef3f0bd8f2b808"}
Feb 19 10:03:41 crc kubenswrapper[4965]: I0219 10:03:41.212227 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed4a364a-14cc-442e-9297-ca9497e633ca" path="/var/lib/kubelet/pods/ed4a364a-14cc-442e-9297-ca9497e633ca/volumes"
Feb 19 10:03:45 crc kubenswrapper[4965]: I0219 10:03:45.040145 4965 generic.go:334] "Generic (PLEG): container finished" podID="ce8bac0d-7aa6-437f-b234-370384cf1153" containerID="79cbeeb63a28f5c389b9ac0a139d4e62412e586c54b91d1d69a4dc78b98f0110" exitCode=0
Feb 19 10:03:45 crc kubenswrapper[4965]: I0219 10:03:45.040206 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7rwpz" event={"ID":"ce8bac0d-7aa6-437f-b234-370384cf1153","Type":"ContainerDied","Data":"79cbeeb63a28f5c389b9ac0a139d4e62412e586c54b91d1d69a4dc78b98f0110"}
Feb 19 10:03:45 crc kubenswrapper[4965]: I0219 10:03:45.042482 4965 generic.go:334] "Generic (PLEG): container finished" podID="e4e3779f-9f25-4334-97f9-a3778bd78d5e" containerID="0b8cd47096442e6c01e98dc21447886c4820940445b68906e0b057f855e32074" exitCode=0
Feb 19 10:03:45 crc kubenswrapper[4965]: I0219 10:03:45.042504 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-wh9q9" event={"ID":"e4e3779f-9f25-4334-97f9-a3778bd78d5e","Type":"ContainerDied","Data":"0b8cd47096442e6c01e98dc21447886c4820940445b68906e0b057f855e32074"}
Feb 19 10:03:45 crc kubenswrapper[4965]: I0219 10:03:45.411596 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qllz5"
Feb 19 10:03:45 crc kubenswrapper[4965]: I0219 10:03:45.490047 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7bc0481-970b-4e8e-868f-490ea553952e-db-sync-config-data\") pod \"d7bc0481-970b-4e8e-868f-490ea553952e\" (UID: \"d7bc0481-970b-4e8e-868f-490ea553952e\") "
Feb 19 10:03:45 crc kubenswrapper[4965]: I0219 10:03:45.490101 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbbv9\" (UniqueName: \"kubernetes.io/projected/d7bc0481-970b-4e8e-868f-490ea553952e-kube-api-access-bbbv9\") pod \"d7bc0481-970b-4e8e-868f-490ea553952e\" (UID: \"d7bc0481-970b-4e8e-868f-490ea553952e\") "
Feb 19 10:03:45 crc kubenswrapper[4965]: I0219 10:03:45.490323 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7bc0481-970b-4e8e-868f-490ea553952e-combined-ca-bundle\") pod \"d7bc0481-970b-4e8e-868f-490ea553952e\" (UID: \"d7bc0481-970b-4e8e-868f-490ea553952e\") "
Feb 19 10:03:45 crc kubenswrapper[4965]: I0219 10:03:45.494078 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7bc0481-970b-4e8e-868f-490ea553952e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d7bc0481-970b-4e8e-868f-490ea553952e" (UID: "d7bc0481-970b-4e8e-868f-490ea553952e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:45 crc kubenswrapper[4965]: I0219 10:03:45.494742 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7bc0481-970b-4e8e-868f-490ea553952e-kube-api-access-bbbv9" (OuterVolumeSpecName: "kube-api-access-bbbv9") pod "d7bc0481-970b-4e8e-868f-490ea553952e" (UID: "d7bc0481-970b-4e8e-868f-490ea553952e"). InnerVolumeSpecName "kube-api-access-bbbv9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:03:45 crc kubenswrapper[4965]: I0219 10:03:45.530406 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7bc0481-970b-4e8e-868f-490ea553952e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7bc0481-970b-4e8e-868f-490ea553952e" (UID: "d7bc0481-970b-4e8e-868f-490ea553952e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:45 crc kubenswrapper[4965]: I0219 10:03:45.592900 4965 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7bc0481-970b-4e8e-868f-490ea553952e-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:45 crc kubenswrapper[4965]: I0219 10:03:45.592959 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbbv9\" (UniqueName: \"kubernetes.io/projected/d7bc0481-970b-4e8e-868f-490ea553952e-kube-api-access-bbbv9\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:45 crc kubenswrapper[4965]: I0219 10:03:45.592974 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7bc0481-970b-4e8e-868f-490ea553952e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:45 crc kubenswrapper[4965]: E0219 10:03:45.649169 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="fcad3660-ade7-407c-9d77-bb1c2c2721a8"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.056451 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fcad3660-ade7-407c-9d77-bb1c2c2721a8","Type":"ContainerStarted","Data":"edb5d7aa57299a1cf933c795be74a347670604b28e56544f77858b059f9c915d"}
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.056837 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fcad3660-ade7-407c-9d77-bb1c2c2721a8" containerName="ceilometer-notification-agent" containerID="cri-o://02531bd557132191c6be3729ea9f4a171329a4568aabbf64acc9d9438d720853" gracePeriod=30
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.056956 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fcad3660-ade7-407c-9d77-bb1c2c2721a8" containerName="sg-core" containerID="cri-o://5154361b55a0d174534bfd97c9f04b35cfce5c6ecac8e56a0dc33ee4d28e72bc" gracePeriod=30
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.056975 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.056970 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fcad3660-ade7-407c-9d77-bb1c2c2721a8" containerName="proxy-httpd" containerID="cri-o://edb5d7aa57299a1cf933c795be74a347670604b28e56544f77858b059f9c915d" gracePeriod=30
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.062560 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qllz5" event={"ID":"d7bc0481-970b-4e8e-868f-490ea553952e","Type":"ContainerDied","Data":"07ce3cdc4fb747d1465483be806cb2cc3d95a8a99e404860088a0ad42519d50e"}
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.062598 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07ce3cdc4fb747d1465483be806cb2cc3d95a8a99e404860088a0ad42519d50e"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.062974 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qllz5"
Feb 19 10:03:46 crc kubenswrapper[4965]: E0219 10:03:46.325954 4965 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcad3660_ade7_407c_9d77_bb1c2c2721a8.slice/crio-edb5d7aa57299a1cf933c795be74a347670604b28e56544f77858b059f9c915d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcad3660_ade7_407c_9d77_bb1c2c2721a8.slice/crio-conmon-edb5d7aa57299a1cf933c795be74a347670604b28e56544f77858b059f9c915d.scope\": RecentStats: unable to find data in memory cache]"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.590828 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-wh9q9"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.598022 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7rwpz"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.785828 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-config-data\") pod \"ce8bac0d-7aa6-437f-b234-370384cf1153\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") "
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.785930 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce8bac0d-7aa6-437f-b234-370384cf1153-etc-machine-id\") pod \"ce8bac0d-7aa6-437f-b234-370384cf1153\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") "
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.785989 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e3779f-9f25-4334-97f9-a3778bd78d5e-combined-ca-bundle\") pod \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\" (UID: \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\") "
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.786009 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e3779f-9f25-4334-97f9-a3778bd78d5e-config-data\") pod \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\" (UID: \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\") "
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.786050 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4e3779f-9f25-4334-97f9-a3778bd78d5e-scripts\") pod \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\" (UID: \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\") "
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.786114 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whzwf\" (UniqueName: \"kubernetes.io/projected/e4e3779f-9f25-4334-97f9-a3778bd78d5e-kube-api-access-whzwf\") pod \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\" (UID: \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\") "
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.786169 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e4e3779f-9f25-4334-97f9-a3778bd78d5e-certs\") pod \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\" (UID: \"e4e3779f-9f25-4334-97f9-a3778bd78d5e\") "
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.786203 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-combined-ca-bundle\") pod \"ce8bac0d-7aa6-437f-b234-370384cf1153\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") "
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.786230 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64rvr\" (UniqueName: \"kubernetes.io/projected/ce8bac0d-7aa6-437f-b234-370384cf1153-kube-api-access-64rvr\") pod \"ce8bac0d-7aa6-437f-b234-370384cf1153\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") "
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.786268 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-db-sync-config-data\") pod \"ce8bac0d-7aa6-437f-b234-370384cf1153\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") "
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.786315 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-scripts\") pod \"ce8bac0d-7aa6-437f-b234-370384cf1153\" (UID: \"ce8bac0d-7aa6-437f-b234-370384cf1153\") "
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.787448 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce8bac0d-7aa6-437f-b234-370384cf1153-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ce8bac0d-7aa6-437f-b234-370384cf1153" (UID: "ce8bac0d-7aa6-437f-b234-370384cf1153"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.799262 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4e3779f-9f25-4334-97f9-a3778bd78d5e-certs" (OuterVolumeSpecName: "certs") pod "e4e3779f-9f25-4334-97f9-a3778bd78d5e" (UID: "e4e3779f-9f25-4334-97f9-a3778bd78d5e"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.807176 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce8bac0d-7aa6-437f-b234-370384cf1153-kube-api-access-64rvr" (OuterVolumeSpecName: "kube-api-access-64rvr") pod "ce8bac0d-7aa6-437f-b234-370384cf1153" (UID: "ce8bac0d-7aa6-437f-b234-370384cf1153"). InnerVolumeSpecName "kube-api-access-64rvr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.816418 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4e3779f-9f25-4334-97f9-a3778bd78d5e-scripts" (OuterVolumeSpecName: "scripts") pod "e4e3779f-9f25-4334-97f9-a3778bd78d5e" (UID: "e4e3779f-9f25-4334-97f9-a3778bd78d5e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.816508 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-scripts" (OuterVolumeSpecName: "scripts") pod "ce8bac0d-7aa6-437f-b234-370384cf1153" (UID: "ce8bac0d-7aa6-437f-b234-370384cf1153"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.816592 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4e3779f-9f25-4334-97f9-a3778bd78d5e-kube-api-access-whzwf" (OuterVolumeSpecName: "kube-api-access-whzwf") pod "e4e3779f-9f25-4334-97f9-a3778bd78d5e" (UID: "e4e3779f-9f25-4334-97f9-a3778bd78d5e"). InnerVolumeSpecName "kube-api-access-whzwf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.823619 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ce8bac0d-7aa6-437f-b234-370384cf1153" (UID: "ce8bac0d-7aa6-437f-b234-370384cf1153"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.834679 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6855d4854d-gc94v"]
Feb 19 10:03:46 crc kubenswrapper[4965]: E0219 10:03:46.835308 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4e3779f-9f25-4334-97f9-a3778bd78d5e" containerName="cloudkitty-db-sync"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.835322 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e3779f-9f25-4334-97f9-a3778bd78d5e" containerName="cloudkitty-db-sync"
Feb 19 10:03:46 crc kubenswrapper[4965]: E0219 10:03:46.835344 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed4a364a-14cc-442e-9297-ca9497e633ca" containerName="dnsmasq-dns"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.835351 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed4a364a-14cc-442e-9297-ca9497e633ca" containerName="dnsmasq-dns"
Feb 19 10:03:46 crc kubenswrapper[4965]: E0219 10:03:46.835369 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce8bac0d-7aa6-437f-b234-370384cf1153" containerName="cinder-db-sync"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.835376 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce8bac0d-7aa6-437f-b234-370384cf1153" containerName="cinder-db-sync"
Feb 19 10:03:46 crc kubenswrapper[4965]: E0219 10:03:46.835393 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7bc0481-970b-4e8e-868f-490ea553952e" containerName="barbican-db-sync"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.835400 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7bc0481-970b-4e8e-868f-490ea553952e" containerName="barbican-db-sync"
Feb 19 10:03:46 crc kubenswrapper[4965]: E0219 10:03:46.835410 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed4a364a-14cc-442e-9297-ca9497e633ca" containerName="init"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.835415 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed4a364a-14cc-442e-9297-ca9497e633ca" containerName="init"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.835602 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7bc0481-970b-4e8e-868f-490ea553952e" containerName="barbican-db-sync"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.835615 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed4a364a-14cc-442e-9297-ca9497e633ca" containerName="dnsmasq-dns"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.835633 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4e3779f-9f25-4334-97f9-a3778bd78d5e" containerName="cloudkitty-db-sync"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.835640 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce8bac0d-7aa6-437f-b234-370384cf1153" containerName="cinder-db-sync"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.836636 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6855d4854d-gc94v"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.848596 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-nhmv6"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.873594 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.873854 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.873970 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-55c5f69ff7-qk9n8"]
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.875934 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-55c5f69ff7-qk9n8"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.877174 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.887309 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce8bac0d-7aa6-437f-b234-370384cf1153" (UID: "ce8bac0d-7aa6-437f-b234-370384cf1153"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.889071 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efe10142-642a-45d3-9f5a-8d1f2cb717e9-logs\") pod \"barbican-keystone-listener-6855d4854d-gc94v\" (UID: \"efe10142-642a-45d3-9f5a-8d1f2cb717e9\") " pod="openstack/barbican-keystone-listener-6855d4854d-gc94v"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.889123 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe10142-642a-45d3-9f5a-8d1f2cb717e9-config-data\") pod \"barbican-keystone-listener-6855d4854d-gc94v\" (UID: \"efe10142-642a-45d3-9f5a-8d1f2cb717e9\") " pod="openstack/barbican-keystone-listener-6855d4854d-gc94v"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.889178 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efe10142-642a-45d3-9f5a-8d1f2cb717e9-config-data-custom\") pod \"barbican-keystone-listener-6855d4854d-gc94v\" (UID: \"efe10142-642a-45d3-9f5a-8d1f2cb717e9\") " pod="openstack/barbican-keystone-listener-6855d4854d-gc94v"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.889221 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84hw5\" (UniqueName: \"kubernetes.io/projected/efe10142-642a-45d3-9f5a-8d1f2cb717e9-kube-api-access-84hw5\") pod \"barbican-keystone-listener-6855d4854d-gc94v\" (UID: \"efe10142-642a-45d3-9f5a-8d1f2cb717e9\") " pod="openstack/barbican-keystone-listener-6855d4854d-gc94v"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.889251 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe10142-642a-45d3-9f5a-8d1f2cb717e9-combined-ca-bundle\") pod \"barbican-keystone-listener-6855d4854d-gc94v\" (UID: \"efe10142-642a-45d3-9f5a-8d1f2cb717e9\") " pod="openstack/barbican-keystone-listener-6855d4854d-gc94v"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.889311 4965 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.889323 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.889331 4965 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce8bac0d-7aa6-437f-b234-370384cf1153-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.889341 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4e3779f-9f25-4334-97f9-a3778bd78d5e-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.889349 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whzwf\" (UniqueName: \"kubernetes.io/projected/e4e3779f-9f25-4334-97f9-a3778bd78d5e-kube-api-access-whzwf\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.889357 4965 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e4e3779f-9f25-4334-97f9-a3778bd78d5e-certs\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.889440 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.889449 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64rvr\" (UniqueName: \"kubernetes.io/projected/ce8bac0d-7aa6-437f-b234-370384cf1153-kube-api-access-64rvr\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.902094 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6855d4854d-gc94v"]
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.905326 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4e3779f-9f25-4334-97f9-a3778bd78d5e-config-data" (OuterVolumeSpecName: "config-data") pod "e4e3779f-9f25-4334-97f9-a3778bd78d5e" (UID: "e4e3779f-9f25-4334-97f9-a3778bd78d5e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.918309 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-55c5f69ff7-qk9n8"]
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.948502 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pv8l6"]
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.950124 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6"
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.987870 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4e3779f-9f25-4334-97f9-a3778bd78d5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4e3779f-9f25-4334-97f9-a3778bd78d5e" (UID: "e4e3779f-9f25-4334-97f9-a3778bd78d5e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:46 crc kubenswrapper[4965]: I0219 10:03:46.989677 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pv8l6"]
Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:46.991064 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe10142-642a-45d3-9f5a-8d1f2cb717e9-combined-ca-bundle\") pod \"barbican-keystone-listener-6855d4854d-gc94v\" (UID: \"efe10142-642a-45d3-9f5a-8d1f2cb717e9\") " pod="openstack/barbican-keystone-listener-6855d4854d-gc94v"
Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:46.991101 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a32c3eed-880c-428c-b58e-d89c763d11b9-combined-ca-bundle\") pod \"barbican-worker-55c5f69ff7-qk9n8\" (UID: \"a32c3eed-880c-428c-b58e-d89c763d11b9\") " pod="openstack/barbican-worker-55c5f69ff7-qk9n8"
Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:46.991182 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-pv8l6\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6"
Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:46.991223 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm86t\" (UniqueName: \"kubernetes.io/projected/06e93745-1a03-492b-be3e-233b34d13bff-kube-api-access-bm86t\") pod \"dnsmasq-dns-848cf88cfc-pv8l6\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6"
Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:46.991269 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-pv8l6\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6"
Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:46.991304 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftf6q\" (UniqueName: \"kubernetes.io/projected/a32c3eed-880c-428c-b58e-d89c763d11b9-kube-api-access-ftf6q\") pod \"barbican-worker-55c5f69ff7-qk9n8\" (UID: \"a32c3eed-880c-428c-b58e-d89c763d11b9\") " pod="openstack/barbican-worker-55c5f69ff7-qk9n8"
Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:46.991326 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efe10142-642a-45d3-9f5a-8d1f2cb717e9-logs\") pod \"barbican-keystone-listener-6855d4854d-gc94v\" (UID: \"efe10142-642a-45d3-9f5a-8d1f2cb717e9\") " pod="openstack/barbican-keystone-listener-6855d4854d-gc94v"
Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:46.991348 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe10142-642a-45d3-9f5a-8d1f2cb717e9-config-data\") pod \"barbican-keystone-listener-6855d4854d-gc94v\" (UID: \"efe10142-642a-45d3-9f5a-8d1f2cb717e9\") " pod="openstack/barbican-keystone-listener-6855d4854d-gc94v"
Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:46.991369 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-config\") pod \"dnsmasq-dns-848cf88cfc-pv8l6\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6"
Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:46.991387 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a32c3eed-880c-428c-b58e-d89c763d11b9-config-data-custom\") pod \"barbican-worker-55c5f69ff7-qk9n8\" (UID: \"a32c3eed-880c-428c-b58e-d89c763d11b9\") " pod="openstack/barbican-worker-55c5f69ff7-qk9n8"
Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:46.991404 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a32c3eed-880c-428c-b58e-d89c763d11b9-config-data\") pod \"barbican-worker-55c5f69ff7-qk9n8\" (UID: \"a32c3eed-880c-428c-b58e-d89c763d11b9\") " pod="openstack/barbican-worker-55c5f69ff7-qk9n8"
Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:46.991426 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-pv8l6\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6"
Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:46.991445 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a32c3eed-880c-428c-b58e-d89c763d11b9-logs\") pod \"barbican-worker-55c5f69ff7-qk9n8\" (UID: \"a32c3eed-880c-428c-b58e-d89c763d11b9\") " pod="openstack/barbican-worker-55c5f69ff7-qk9n8"
Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:46.991460 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-pv8l6\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6"
Feb 19
10:03:47 crc kubenswrapper[4965]: I0219 10:03:46.991489 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efe10142-642a-45d3-9f5a-8d1f2cb717e9-config-data-custom\") pod \"barbican-keystone-listener-6855d4854d-gc94v\" (UID: \"efe10142-642a-45d3-9f5a-8d1f2cb717e9\") " pod="openstack/barbican-keystone-listener-6855d4854d-gc94v" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:46.991585 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84hw5\" (UniqueName: \"kubernetes.io/projected/efe10142-642a-45d3-9f5a-8d1f2cb717e9-kube-api-access-84hw5\") pod \"barbican-keystone-listener-6855d4854d-gc94v\" (UID: \"efe10142-642a-45d3-9f5a-8d1f2cb717e9\") " pod="openstack/barbican-keystone-listener-6855d4854d-gc94v" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:46.991650 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e3779f-9f25-4334-97f9-a3778bd78d5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:46.991663 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e3779f-9f25-4334-97f9-a3778bd78d5e-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:46.994966 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efe10142-642a-45d3-9f5a-8d1f2cb717e9-logs\") pod \"barbican-keystone-listener-6855d4854d-gc94v\" (UID: \"efe10142-642a-45d3-9f5a-8d1f2cb717e9\") " pod="openstack/barbican-keystone-listener-6855d4854d-gc94v" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.000386 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-config-data" 
(OuterVolumeSpecName: "config-data") pod "ce8bac0d-7aa6-437f-b234-370384cf1153" (UID: "ce8bac0d-7aa6-437f-b234-370384cf1153"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.001293 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe10142-642a-45d3-9f5a-8d1f2cb717e9-combined-ca-bundle\") pod \"barbican-keystone-listener-6855d4854d-gc94v\" (UID: \"efe10142-642a-45d3-9f5a-8d1f2cb717e9\") " pod="openstack/barbican-keystone-listener-6855d4854d-gc94v" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.030344 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe10142-642a-45d3-9f5a-8d1f2cb717e9-config-data\") pod \"barbican-keystone-listener-6855d4854d-gc94v\" (UID: \"efe10142-642a-45d3-9f5a-8d1f2cb717e9\") " pod="openstack/barbican-keystone-listener-6855d4854d-gc94v" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.033442 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efe10142-642a-45d3-9f5a-8d1f2cb717e9-config-data-custom\") pod \"barbican-keystone-listener-6855d4854d-gc94v\" (UID: \"efe10142-642a-45d3-9f5a-8d1f2cb717e9\") " pod="openstack/barbican-keystone-listener-6855d4854d-gc94v" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.038797 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84hw5\" (UniqueName: \"kubernetes.io/projected/efe10142-642a-45d3-9f5a-8d1f2cb717e9-kube-api-access-84hw5\") pod \"barbican-keystone-listener-6855d4854d-gc94v\" (UID: \"efe10142-642a-45d3-9f5a-8d1f2cb717e9\") " pod="openstack/barbican-keystone-listener-6855d4854d-gc94v" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.093379 4965 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a32c3eed-880c-428c-b58e-d89c763d11b9-combined-ca-bundle\") pod \"barbican-worker-55c5f69ff7-qk9n8\" (UID: \"a32c3eed-880c-428c-b58e-d89c763d11b9\") " pod="openstack/barbican-worker-55c5f69ff7-qk9n8" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.093439 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-pv8l6\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.093474 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm86t\" (UniqueName: \"kubernetes.io/projected/06e93745-1a03-492b-be3e-233b34d13bff-kube-api-access-bm86t\") pod \"dnsmasq-dns-848cf88cfc-pv8l6\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.093533 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-pv8l6\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.093583 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftf6q\" (UniqueName: \"kubernetes.io/projected/a32c3eed-880c-428c-b58e-d89c763d11b9-kube-api-access-ftf6q\") pod \"barbican-worker-55c5f69ff7-qk9n8\" (UID: \"a32c3eed-880c-428c-b58e-d89c763d11b9\") " pod="openstack/barbican-worker-55c5f69ff7-qk9n8" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.093620 4965 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-config\") pod \"dnsmasq-dns-848cf88cfc-pv8l6\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.093642 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a32c3eed-880c-428c-b58e-d89c763d11b9-config-data-custom\") pod \"barbican-worker-55c5f69ff7-qk9n8\" (UID: \"a32c3eed-880c-428c-b58e-d89c763d11b9\") " pod="openstack/barbican-worker-55c5f69ff7-qk9n8" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.093662 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a32c3eed-880c-428c-b58e-d89c763d11b9-config-data\") pod \"barbican-worker-55c5f69ff7-qk9n8\" (UID: \"a32c3eed-880c-428c-b58e-d89c763d11b9\") " pod="openstack/barbican-worker-55c5f69ff7-qk9n8" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.093687 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-pv8l6\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.093709 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a32c3eed-880c-428c-b58e-d89c763d11b9-logs\") pod \"barbican-worker-55c5f69ff7-qk9n8\" (UID: \"a32c3eed-880c-428c-b58e-d89c763d11b9\") " pod="openstack/barbican-worker-55c5f69ff7-qk9n8" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.093726 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-pv8l6\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.093824 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce8bac0d-7aa6-437f-b234-370384cf1153-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.094710 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-pv8l6\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.097692 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-pv8l6\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.098032 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-pv8l6\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.099914 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-pv8l6\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6" 
Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.101251 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-config\") pod \"dnsmasq-dns-848cf88cfc-pv8l6\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.101915 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a32c3eed-880c-428c-b58e-d89c763d11b9-logs\") pod \"barbican-worker-55c5f69ff7-qk9n8\" (UID: \"a32c3eed-880c-428c-b58e-d89c763d11b9\") " pod="openstack/barbican-worker-55c5f69ff7-qk9n8" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.102299 4965 generic.go:334] "Generic (PLEG): container finished" podID="fcad3660-ade7-407c-9d77-bb1c2c2721a8" containerID="edb5d7aa57299a1cf933c795be74a347670604b28e56544f77858b059f9c915d" exitCode=0 Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.102324 4965 generic.go:334] "Generic (PLEG): container finished" podID="fcad3660-ade7-407c-9d77-bb1c2c2721a8" containerID="5154361b55a0d174534bfd97c9f04b35cfce5c6ecac8e56a0dc33ee4d28e72bc" exitCode=2 Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.102368 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fcad3660-ade7-407c-9d77-bb1c2c2721a8","Type":"ContainerDied","Data":"edb5d7aa57299a1cf933c795be74a347670604b28e56544f77858b059f9c915d"} Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.102400 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fcad3660-ade7-407c-9d77-bb1c2c2721a8","Type":"ContainerDied","Data":"5154361b55a0d174534bfd97c9f04b35cfce5c6ecac8e56a0dc33ee4d28e72bc"} Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.104555 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a32c3eed-880c-428c-b58e-d89c763d11b9-combined-ca-bundle\") pod \"barbican-worker-55c5f69ff7-qk9n8\" (UID: \"a32c3eed-880c-428c-b58e-d89c763d11b9\") " pod="openstack/barbican-worker-55c5f69ff7-qk9n8" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.105158 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a32c3eed-880c-428c-b58e-d89c763d11b9-config-data-custom\") pod \"barbican-worker-55c5f69ff7-qk9n8\" (UID: \"a32c3eed-880c-428c-b58e-d89c763d11b9\") " pod="openstack/barbican-worker-55c5f69ff7-qk9n8" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.105283 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7rwpz" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.105165 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7rwpz" event={"ID":"ce8bac0d-7aa6-437f-b234-370384cf1153","Type":"ContainerDied","Data":"137b331a032251deb704440163a87a12fcabd54f7a5554cc0933473d44674a62"} Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.105415 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="137b331a032251deb704440163a87a12fcabd54f7a5554cc0933473d44674a62" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.117581 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a32c3eed-880c-428c-b58e-d89c763d11b9-config-data\") pod \"barbican-worker-55c5f69ff7-qk9n8\" (UID: \"a32c3eed-880c-428c-b58e-d89c763d11b9\") " pod="openstack/barbican-worker-55c5f69ff7-qk9n8" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.140448 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-wh9q9" 
event={"ID":"e4e3779f-9f25-4334-97f9-a3778bd78d5e","Type":"ContainerDied","Data":"66c1218bdd600b8d7b583f7bc8544b41847e9f3dfa59899066b033f707cfcf4b"} Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.140645 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66c1218bdd600b8d7b583f7bc8544b41847e9f3dfa59899066b033f707cfcf4b" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.140661 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-wh9q9" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.149485 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm86t\" (UniqueName: \"kubernetes.io/projected/06e93745-1a03-492b-be3e-233b34d13bff-kube-api-access-bm86t\") pod \"dnsmasq-dns-848cf88cfc-pv8l6\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.159337 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftf6q\" (UniqueName: \"kubernetes.io/projected/a32c3eed-880c-428c-b58e-d89c763d11b9-kube-api-access-ftf6q\") pod \"barbican-worker-55c5f69ff7-qk9n8\" (UID: \"a32c3eed-880c-428c-b58e-d89c763d11b9\") " pod="openstack/barbican-worker-55c5f69ff7-qk9n8" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.176500 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-d7568bd5b-nvbtn"] Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.178774 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-d7568bd5b-nvbtn" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.182370 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.241163 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d7568bd5b-nvbtn"] Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.291385 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-vtxxb"] Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.292644 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-vtxxb" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.296695 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.296900 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.297112 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.297371 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-rbdxx" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.297563 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.297970 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnrhd\" (UniqueName: \"kubernetes.io/projected/e59b9522-d30f-4640-8e62-55e0b0c91c9a-kube-api-access-xnrhd\") pod \"barbican-api-d7568bd5b-nvbtn\" (UID: \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\") " 
pod="openstack/barbican-api-d7568bd5b-nvbtn" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.298059 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e59b9522-d30f-4640-8e62-55e0b0c91c9a-combined-ca-bundle\") pod \"barbican-api-d7568bd5b-nvbtn\" (UID: \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\") " pod="openstack/barbican-api-d7568bd5b-nvbtn" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.298088 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e59b9522-d30f-4640-8e62-55e0b0c91c9a-config-data-custom\") pod \"barbican-api-d7568bd5b-nvbtn\" (UID: \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\") " pod="openstack/barbican-api-d7568bd5b-nvbtn" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.298115 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e59b9522-d30f-4640-8e62-55e0b0c91c9a-config-data\") pod \"barbican-api-d7568bd5b-nvbtn\" (UID: \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\") " pod="openstack/barbican-api-d7568bd5b-nvbtn" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.298156 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e59b9522-d30f-4640-8e62-55e0b0c91c9a-logs\") pod \"barbican-api-d7568bd5b-nvbtn\" (UID: \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\") " pod="openstack/barbican-api-d7568bd5b-nvbtn" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.301243 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-vtxxb"] Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.303206 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6855d4854d-gc94v" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.336603 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-55c5f69ff7-qk9n8" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.355367 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.395394 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.397178 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.405261 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/712a9147-94b7-45f8-97a9-3c0a988f748d-combined-ca-bundle\") pod \"cloudkitty-storageinit-vtxxb\" (UID: \"712a9147-94b7-45f8-97a9-3c0a988f748d\") " pod="openstack/cloudkitty-storageinit-vtxxb" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.405311 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/712a9147-94b7-45f8-97a9-3c0a988f748d-config-data\") pod \"cloudkitty-storageinit-vtxxb\" (UID: \"712a9147-94b7-45f8-97a9-3c0a988f748d\") " pod="openstack/cloudkitty-storageinit-vtxxb" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.405347 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e59b9522-d30f-4640-8e62-55e0b0c91c9a-combined-ca-bundle\") pod \"barbican-api-d7568bd5b-nvbtn\" (UID: \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\") " pod="openstack/barbican-api-d7568bd5b-nvbtn" Feb 19 
10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.405374 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e59b9522-d30f-4640-8e62-55e0b0c91c9a-config-data-custom\") pod \"barbican-api-d7568bd5b-nvbtn\" (UID: \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\") " pod="openstack/barbican-api-d7568bd5b-nvbtn" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.405413 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e59b9522-d30f-4640-8e62-55e0b0c91c9a-config-data\") pod \"barbican-api-d7568bd5b-nvbtn\" (UID: \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\") " pod="openstack/barbican-api-d7568bd5b-nvbtn" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.405470 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/712a9147-94b7-45f8-97a9-3c0a988f748d-scripts\") pod \"cloudkitty-storageinit-vtxxb\" (UID: \"712a9147-94b7-45f8-97a9-3c0a988f748d\") " pod="openstack/cloudkitty-storageinit-vtxxb" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.405488 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/712a9147-94b7-45f8-97a9-3c0a988f748d-certs\") pod \"cloudkitty-storageinit-vtxxb\" (UID: \"712a9147-94b7-45f8-97a9-3c0a988f748d\") " pod="openstack/cloudkitty-storageinit-vtxxb" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.405522 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e59b9522-d30f-4640-8e62-55e0b0c91c9a-logs\") pod \"barbican-api-d7568bd5b-nvbtn\" (UID: \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\") " pod="openstack/barbican-api-d7568bd5b-nvbtn" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.405589 4965 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jz7k\" (UniqueName: \"kubernetes.io/projected/712a9147-94b7-45f8-97a9-3c0a988f748d-kube-api-access-6jz7k\") pod \"cloudkitty-storageinit-vtxxb\" (UID: \"712a9147-94b7-45f8-97a9-3c0a988f748d\") " pod="openstack/cloudkitty-storageinit-vtxxb" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.405648 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnrhd\" (UniqueName: \"kubernetes.io/projected/e59b9522-d30f-4640-8e62-55e0b0c91c9a-kube-api-access-xnrhd\") pod \"barbican-api-d7568bd5b-nvbtn\" (UID: \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\") " pod="openstack/barbican-api-d7568bd5b-nvbtn" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.407528 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.407709 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5dg9r" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.407815 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.407915 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.408831 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e59b9522-d30f-4640-8e62-55e0b0c91c9a-logs\") pod \"barbican-api-d7568bd5b-nvbtn\" (UID: \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\") " pod="openstack/barbican-api-d7568bd5b-nvbtn" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.418937 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e59b9522-d30f-4640-8e62-55e0b0c91c9a-combined-ca-bundle\") pod \"barbican-api-d7568bd5b-nvbtn\" (UID: \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\") " pod="openstack/barbican-api-d7568bd5b-nvbtn" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.425637 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.427590 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e59b9522-d30f-4640-8e62-55e0b0c91c9a-config-data\") pod \"barbican-api-d7568bd5b-nvbtn\" (UID: \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\") " pod="openstack/barbican-api-d7568bd5b-nvbtn" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.448552 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnrhd\" (UniqueName: \"kubernetes.io/projected/e59b9522-d30f-4640-8e62-55e0b0c91c9a-kube-api-access-xnrhd\") pod \"barbican-api-d7568bd5b-nvbtn\" (UID: \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\") " pod="openstack/barbican-api-d7568bd5b-nvbtn" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.474800 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pv8l6"] Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.478133 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e59b9522-d30f-4640-8e62-55e0b0c91c9a-config-data-custom\") pod \"barbican-api-d7568bd5b-nvbtn\" (UID: \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\") " pod="openstack/barbican-api-d7568bd5b-nvbtn" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.506784 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.506849 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.506891 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/712a9147-94b7-45f8-97a9-3c0a988f748d-scripts\") pod \"cloudkitty-storageinit-vtxxb\" (UID: \"712a9147-94b7-45f8-97a9-3c0a988f748d\") " pod="openstack/cloudkitty-storageinit-vtxxb" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.506911 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/712a9147-94b7-45f8-97a9-3c0a988f748d-certs\") pod \"cloudkitty-storageinit-vtxxb\" (UID: \"712a9147-94b7-45f8-97a9-3c0a988f748d\") " pod="openstack/cloudkitty-storageinit-vtxxb" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.506943 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-config-data\") pod \"cinder-scheduler-0\" (UID: \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.506971 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-scripts\") pod \"cinder-scheduler-0\" (UID: \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:47 crc 
kubenswrapper[4965]: I0219 10:03:47.506989 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jz7k\" (UniqueName: \"kubernetes.io/projected/712a9147-94b7-45f8-97a9-3c0a988f748d-kube-api-access-6jz7k\") pod \"cloudkitty-storageinit-vtxxb\" (UID: \"712a9147-94b7-45f8-97a9-3c0a988f748d\") " pod="openstack/cloudkitty-storageinit-vtxxb" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.507023 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hl6r\" (UniqueName: \"kubernetes.io/projected/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-kube-api-access-5hl6r\") pod \"cinder-scheduler-0\" (UID: \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.507369 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/712a9147-94b7-45f8-97a9-3c0a988f748d-combined-ca-bundle\") pod \"cloudkitty-storageinit-vtxxb\" (UID: \"712a9147-94b7-45f8-97a9-3c0a988f748d\") " pod="openstack/cloudkitty-storageinit-vtxxb" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.507399 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.507416 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/712a9147-94b7-45f8-97a9-3c0a988f748d-config-data\") pod \"cloudkitty-storageinit-vtxxb\" (UID: \"712a9147-94b7-45f8-97a9-3c0a988f748d\") " pod="openstack/cloudkitty-storageinit-vtxxb" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 
10:03:47.531988 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/712a9147-94b7-45f8-97a9-3c0a988f748d-combined-ca-bundle\") pod \"cloudkitty-storageinit-vtxxb\" (UID: \"712a9147-94b7-45f8-97a9-3c0a988f748d\") " pod="openstack/cloudkitty-storageinit-vtxxb" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.532745 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d7568bd5b-nvbtn" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.538262 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/712a9147-94b7-45f8-97a9-3c0a988f748d-scripts\") pod \"cloudkitty-storageinit-vtxxb\" (UID: \"712a9147-94b7-45f8-97a9-3c0a988f748d\") " pod="openstack/cloudkitty-storageinit-vtxxb" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.550005 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jz7k\" (UniqueName: \"kubernetes.io/projected/712a9147-94b7-45f8-97a9-3c0a988f748d-kube-api-access-6jz7k\") pod \"cloudkitty-storageinit-vtxxb\" (UID: \"712a9147-94b7-45f8-97a9-3c0a988f748d\") " pod="openstack/cloudkitty-storageinit-vtxxb" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.551305 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/712a9147-94b7-45f8-97a9-3c0a988f748d-config-data\") pod \"cloudkitty-storageinit-vtxxb\" (UID: \"712a9147-94b7-45f8-97a9-3c0a988f748d\") " pod="openstack/cloudkitty-storageinit-vtxxb" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.554082 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/712a9147-94b7-45f8-97a9-3c0a988f748d-certs\") pod \"cloudkitty-storageinit-vtxxb\" (UID: \"712a9147-94b7-45f8-97a9-3c0a988f748d\") " 
pod="openstack/cloudkitty-storageinit-vtxxb" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.573144 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-vtxxb" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.577506 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g6g6j"] Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.579115 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.605259 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g6g6j"] Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.610830 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-config-data\") pod \"cinder-scheduler-0\" (UID: \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.611140 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-scripts\") pod \"cinder-scheduler-0\" (UID: \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.611270 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hl6r\" (UniqueName: \"kubernetes.io/projected/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-kube-api-access-5hl6r\") pod \"cinder-scheduler-0\" (UID: \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.611433 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.611543 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.611652 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.613781 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.617940 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-config-data\") pod \"cinder-scheduler-0\" (UID: \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.620436 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " 
pod="openstack/cinder-scheduler-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.627735 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.628093 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-scripts\") pod \"cinder-scheduler-0\" (UID: \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.668989 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hl6r\" (UniqueName: \"kubernetes.io/projected/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-kube-api-access-5hl6r\") pod \"cinder-scheduler-0\" (UID: \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " pod="openstack/cinder-scheduler-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.681415 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.692782 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.695878 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.721299 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-g6g6j\" (UID: \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.721364 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-dns-svc\") pod \"dnsmasq-dns-6578955fd5-g6g6j\" (UID: \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.721572 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-config\") pod \"dnsmasq-dns-6578955fd5-g6g6j\" (UID: \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.721619 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzw59\" (UniqueName: \"kubernetes.io/projected/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-kube-api-access-zzw59\") pod \"dnsmasq-dns-6578955fd5-g6g6j\" (UID: \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.724269 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-g6g6j\" (UID: \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.724800 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-g6g6j\" (UID: \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.744917 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.827347 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-logs\") pod \"cinder-api-0\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") " pod="openstack/cinder-api-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.827404 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-g6g6j\" (UID: \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.827528 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-g6g6j\" (UID: \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" Feb 19 10:03:47 crc kubenswrapper[4965]: 
I0219 10:03:47.830882 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-g6g6j\" (UID: \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.831301 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-config-data\") pod \"cinder-api-0\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") " pod="openstack/cinder-api-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.831358 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmwsh\" (UniqueName: \"kubernetes.io/projected/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-kube-api-access-rmwsh\") pod \"cinder-api-0\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") " pod="openstack/cinder-api-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.831546 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-g6g6j\" (UID: \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.831554 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-g6g6j\" (UID: \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.831606 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-dns-svc\") pod \"dnsmasq-dns-6578955fd5-g6g6j\" (UID: \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.831639 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") " pod="openstack/cinder-api-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.831821 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-config-data-custom\") pod \"cinder-api-0\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") " pod="openstack/cinder-api-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.831852 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-config\") pod \"dnsmasq-dns-6578955fd5-g6g6j\" (UID: \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.831892 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzw59\" (UniqueName: \"kubernetes.io/projected/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-kube-api-access-zzw59\") pod \"dnsmasq-dns-6578955fd5-g6g6j\" (UID: \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.831960 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") " pod="openstack/cinder-api-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.831988 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-scripts\") pod \"cinder-api-0\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") " pod="openstack/cinder-api-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.832342 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-g6g6j\" (UID: \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.832562 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-dns-svc\") pod \"dnsmasq-dns-6578955fd5-g6g6j\" (UID: \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.833126 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-config\") pod \"dnsmasq-dns-6578955fd5-g6g6j\" (UID: \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.869131 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzw59\" (UniqueName: \"kubernetes.io/projected/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-kube-api-access-zzw59\") pod \"dnsmasq-dns-6578955fd5-g6g6j\" (UID: 
\"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.883044 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.902025 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.933715 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-config-data-custom\") pod \"cinder-api-0\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") " pod="openstack/cinder-api-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.933795 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") " pod="openstack/cinder-api-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.933818 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-scripts\") pod \"cinder-api-0\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") " pod="openstack/cinder-api-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.933853 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-logs\") pod \"cinder-api-0\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") " pod="openstack/cinder-api-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.933889 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-config-data\") pod \"cinder-api-0\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") " pod="openstack/cinder-api-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.933905 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmwsh\" (UniqueName: \"kubernetes.io/projected/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-kube-api-access-rmwsh\") pod \"cinder-api-0\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") " pod="openstack/cinder-api-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.933973 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") " pod="openstack/cinder-api-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.934074 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") " pod="openstack/cinder-api-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.934425 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-logs\") pod \"cinder-api-0\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") " pod="openstack/cinder-api-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.946653 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-config-data\") pod \"cinder-api-0\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") " pod="openstack/cinder-api-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 
10:03:47.947629 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-scripts\") pod \"cinder-api-0\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") " pod="openstack/cinder-api-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.947797 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-config-data-custom\") pod \"cinder-api-0\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") " pod="openstack/cinder-api-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.951573 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") " pod="openstack/cinder-api-0" Feb 19 10:03:47 crc kubenswrapper[4965]: I0219 10:03:47.961899 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmwsh\" (UniqueName: \"kubernetes.io/projected/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-kube-api-access-rmwsh\") pod \"cinder-api-0\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") " pod="openstack/cinder-api-0" Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.039701 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.166812 4965 generic.go:334] "Generic (PLEG): container finished" podID="fcad3660-ade7-407c-9d77-bb1c2c2721a8" containerID="02531bd557132191c6be3729ea9f4a171329a4568aabbf64acc9d9438d720853" exitCode=0 Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.166857 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fcad3660-ade7-407c-9d77-bb1c2c2721a8","Type":"ContainerDied","Data":"02531bd557132191c6be3729ea9f4a171329a4568aabbf64acc9d9438d720853"} Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.338344 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pv8l6"] Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.607360 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.648203 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-config-data\") pod \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.648330 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-sg-core-conf-yaml\") pod \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.648395 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-scripts\") pod \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " Feb 19 10:03:48 
crc kubenswrapper[4965]: I0219 10:03:48.648425 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flv7f\" (UniqueName: \"kubernetes.io/projected/fcad3660-ade7-407c-9d77-bb1c2c2721a8-kube-api-access-flv7f\") pod \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.648578 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcad3660-ade7-407c-9d77-bb1c2c2721a8-run-httpd\") pod \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.648635 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcad3660-ade7-407c-9d77-bb1c2c2721a8-log-httpd\") pod \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.648677 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-combined-ca-bundle\") pod \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\" (UID: \"fcad3660-ade7-407c-9d77-bb1c2c2721a8\") " Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.657548 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcad3660-ade7-407c-9d77-bb1c2c2721a8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fcad3660-ade7-407c-9d77-bb1c2c2721a8" (UID: "fcad3660-ade7-407c-9d77-bb1c2c2721a8"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.657613 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6855d4854d-gc94v"] Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.657878 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcad3660-ade7-407c-9d77-bb1c2c2721a8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fcad3660-ade7-407c-9d77-bb1c2c2721a8" (UID: "fcad3660-ade7-407c-9d77-bb1c2c2721a8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.661487 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-scripts" (OuterVolumeSpecName: "scripts") pod "fcad3660-ade7-407c-9d77-bb1c2c2721a8" (UID: "fcad3660-ade7-407c-9d77-bb1c2c2721a8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.663146 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcad3660-ade7-407c-9d77-bb1c2c2721a8-kube-api-access-flv7f" (OuterVolumeSpecName: "kube-api-access-flv7f") pod "fcad3660-ade7-407c-9d77-bb1c2c2721a8" (UID: "fcad3660-ade7-407c-9d77-bb1c2c2721a8"). InnerVolumeSpecName "kube-api-access-flv7f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.694567 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d7568bd5b-nvbtn"] Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.733611 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.736427 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fcad3660-ade7-407c-9d77-bb1c2c2721a8" (UID: "fcad3660-ade7-407c-9d77-bb1c2c2721a8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.751419 4965 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.751453 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.751496 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flv7f\" (UniqueName: \"kubernetes.io/projected/fcad3660-ade7-407c-9d77-bb1c2c2721a8-kube-api-access-flv7f\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.751505 4965 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcad3660-ade7-407c-9d77-bb1c2c2721a8-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.751514 4965 reconciler_common.go:293] "Volume detached for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcad3660-ade7-407c-9d77-bb1c2c2721a8-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.805469 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-55c5f69ff7-qk9n8"] Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.821874 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-vtxxb"] Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.860305 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fcad3660-ade7-407c-9d77-bb1c2c2721a8" (UID: "fcad3660-ade7-407c-9d77-bb1c2c2721a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.957538 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:48 crc kubenswrapper[4965]: I0219 10:03:48.993247 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.001849 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-config-data" (OuterVolumeSpecName: "config-data") pod "fcad3660-ade7-407c-9d77-bb1c2c2721a8" (UID: "fcad3660-ade7-407c-9d77-bb1c2c2721a8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.053680 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g6g6j"] Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.062574 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcad3660-ade7-407c-9d77-bb1c2c2721a8-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:49 crc kubenswrapper[4965]: W0219 10:03:49.124870 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc21e28dc_1ac5_404c_bbb3_6298f94fc5d0.slice/crio-3b1eb84afa0d3d4129b82241431bc265f5175804b25010b5fa46eb4cb6dcda35 WatchSource:0}: Error finding container 3b1eb84afa0d3d4129b82241431bc265f5175804b25010b5fa46eb4cb6dcda35: Status 404 returned error can't find the container with id 3b1eb84afa0d3d4129b82241431bc265f5175804b25010b5fa46eb4cb6dcda35 Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.132711 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.132770 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.229020 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.229108 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.233915 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d7568bd5b-nvbtn" 
event={"ID":"e59b9522-d30f-4640-8e62-55e0b0c91c9a","Type":"ContainerStarted","Data":"bb586ffe32bb9aad537fdcfbf563a3527d9d8f1844ac3dc36c18cb230a15370d"} Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.233963 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d7568bd5b-nvbtn" event={"ID":"e59b9522-d30f-4640-8e62-55e0b0c91c9a","Type":"ContainerStarted","Data":"4ec3e80bcc7e823b3341d1c61be5db3cf22a497f7f37ecaad7ddbd5dc52f3e18"} Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.238275 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-vtxxb" event={"ID":"712a9147-94b7-45f8-97a9-3c0a988f748d","Type":"ContainerStarted","Data":"f19af3184ea5ecc7079b43e1007844543091623f7186f40357babcfe0d342503"} Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.253647 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55c5f69ff7-qk9n8" event={"ID":"a32c3eed-880c-428c-b58e-d89c763d11b9","Type":"ContainerStarted","Data":"7fccef2130f859cfb6e2a02c6063c3bab8034175f48fa13b1bdb4523a8804051"} Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.267210 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fcad3660-ade7-407c-9d77-bb1c2c2721a8","Type":"ContainerDied","Data":"fc0afb4918c8382cd21d552560d3252b679e39f5e5b2404026fa93ac9b3a29ab"} Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.267271 4965 scope.go:117] "RemoveContainer" containerID="edb5d7aa57299a1cf933c795be74a347670604b28e56544f77858b059f9c915d" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.267229 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.281733 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6855d4854d-gc94v" event={"ID":"efe10142-642a-45d3-9f5a-8d1f2cb717e9","Type":"ContainerStarted","Data":"c0916f76308b7164a0b1cda137041595c8c6688fe9d1b21953aeef1f83d85e5a"} Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.285016 4965 generic.go:334] "Generic (PLEG): container finished" podID="06e93745-1a03-492b-be3e-233b34d13bff" containerID="de381582d7372deee1e68982192a31791bf778e7e92c28b6e090131710a6c23f" exitCode=0 Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.285087 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6" event={"ID":"06e93745-1a03-492b-be3e-233b34d13bff","Type":"ContainerDied","Data":"de381582d7372deee1e68982192a31791bf778e7e92c28b6e090131710a6c23f"} Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.285153 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6" event={"ID":"06e93745-1a03-492b-be3e-233b34d13bff","Type":"ContainerStarted","Data":"59cbc38a5f388a9b4ec3d2b2f87c819b8ff8bfab3cbb76b0c4b970bff58040ff"} Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.292247 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5bf5dff6-70bd-4013-95d1-6e30d7e765a0","Type":"ContainerStarted","Data":"2fadd622db164bc8e1764ec13c62dcf4396134d0e6387514ae6b5dd8fe3693c2"} Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.323589 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" event={"ID":"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0","Type":"ContainerStarted","Data":"3b1eb84afa0d3d4129b82241431bc265f5175804b25010b5fa46eb4cb6dcda35"} Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.329626 4965 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/ceilometer-0"] Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.337332 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8","Type":"ContainerStarted","Data":"c1215623fb5856bfd9ebfbd039c47be193026820dd2ae384c59e5fe27016deda"} Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.337371 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.337480 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.337773 4965 scope.go:117] "RemoveContainer" containerID="5154361b55a0d174534bfd97c9f04b35cfce5c6ecac8e56a0dc33ee4d28e72bc" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.351846 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.408497 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:03:49 crc kubenswrapper[4965]: E0219 10:03:49.422960 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcad3660-ade7-407c-9d77-bb1c2c2721a8" containerName="proxy-httpd" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.423211 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcad3660-ade7-407c-9d77-bb1c2c2721a8" containerName="proxy-httpd" Feb 19 10:03:49 crc kubenswrapper[4965]: E0219 10:03:49.423427 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcad3660-ade7-407c-9d77-bb1c2c2721a8" containerName="sg-core" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.423572 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcad3660-ade7-407c-9d77-bb1c2c2721a8" containerName="sg-core" Feb 19 10:03:49 crc kubenswrapper[4965]: E0219 
10:03:49.423797 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcad3660-ade7-407c-9d77-bb1c2c2721a8" containerName="ceilometer-notification-agent" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.423927 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcad3660-ade7-407c-9d77-bb1c2c2721a8" containerName="ceilometer-notification-agent" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.424678 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcad3660-ade7-407c-9d77-bb1c2c2721a8" containerName="sg-core" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.424770 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcad3660-ade7-407c-9d77-bb1c2c2721a8" containerName="ceilometer-notification-agent" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.424851 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcad3660-ade7-407c-9d77-bb1c2c2721a8" containerName="proxy-httpd" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.427091 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.431878 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.432256 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.457570 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.581416 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " pod="openstack/ceilometer-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.581517 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slt4m\" (UniqueName: \"kubernetes.io/projected/648dfd3e-578b-4808-84c2-6dd4b4a7954c-kube-api-access-slt4m\") pod \"ceilometer-0\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " pod="openstack/ceilometer-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.581555 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " pod="openstack/ceilometer-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.581588 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-config-data\") pod \"ceilometer-0\" (UID: 
\"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " pod="openstack/ceilometer-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.581603 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/648dfd3e-578b-4808-84c2-6dd4b4a7954c-run-httpd\") pod \"ceilometer-0\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " pod="openstack/ceilometer-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.581646 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-scripts\") pod \"ceilometer-0\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " pod="openstack/ceilometer-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.581663 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/648dfd3e-578b-4808-84c2-6dd4b4a7954c-log-httpd\") pod \"ceilometer-0\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " pod="openstack/ceilometer-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.658247 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.683297 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slt4m\" (UniqueName: \"kubernetes.io/projected/648dfd3e-578b-4808-84c2-6dd4b4a7954c-kube-api-access-slt4m\") pod \"ceilometer-0\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " pod="openstack/ceilometer-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.683365 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " pod="openstack/ceilometer-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.683403 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-config-data\") pod \"ceilometer-0\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " pod="openstack/ceilometer-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.683423 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/648dfd3e-578b-4808-84c2-6dd4b4a7954c-run-httpd\") pod \"ceilometer-0\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " pod="openstack/ceilometer-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.683478 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-scripts\") pod \"ceilometer-0\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " pod="openstack/ceilometer-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.683496 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/648dfd3e-578b-4808-84c2-6dd4b4a7954c-log-httpd\") pod \"ceilometer-0\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " pod="openstack/ceilometer-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.683539 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " pod="openstack/ceilometer-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.684478 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/648dfd3e-578b-4808-84c2-6dd4b4a7954c-run-httpd\") pod \"ceilometer-0\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " pod="openstack/ceilometer-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.684855 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/648dfd3e-578b-4808-84c2-6dd4b4a7954c-log-httpd\") pod \"ceilometer-0\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " pod="openstack/ceilometer-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.699098 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slt4m\" (UniqueName: \"kubernetes.io/projected/648dfd3e-578b-4808-84c2-6dd4b4a7954c-kube-api-access-slt4m\") pod \"ceilometer-0\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " pod="openstack/ceilometer-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.700022 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-config-data\") pod \"ceilometer-0\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " pod="openstack/ceilometer-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.700226 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-scripts\") pod \"ceilometer-0\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " pod="openstack/ceilometer-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.700313 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " pod="openstack/ceilometer-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.701291 4965 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " pod="openstack/ceilometer-0" Feb 19 10:03:49 crc kubenswrapper[4965]: I0219 10:03:49.991540 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:03:50 crc kubenswrapper[4965]: I0219 10:03:50.353758 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d7568bd5b-nvbtn" event={"ID":"e59b9522-d30f-4640-8e62-55e0b0c91c9a","Type":"ContainerStarted","Data":"c02d9e12468023d21ce3165981bc638d5ab08a661112da13016f680b97a1989b"} Feb 19 10:03:50 crc kubenswrapper[4965]: I0219 10:03:50.354103 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d7568bd5b-nvbtn" Feb 19 10:03:50 crc kubenswrapper[4965]: I0219 10:03:50.354124 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d7568bd5b-nvbtn" Feb 19 10:03:50 crc kubenswrapper[4965]: I0219 10:03:50.356537 4965 generic.go:334] "Generic (PLEG): container finished" podID="c21e28dc-1ac5-404c-bbb3-6298f94fc5d0" containerID="bb0c0c197521e180fedcfe62f88576b0bb513ceda467eca09b0353e6c5f43353" exitCode=0 Feb 19 10:03:50 crc kubenswrapper[4965]: I0219 10:03:50.356641 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" event={"ID":"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0","Type":"ContainerDied","Data":"bb0c0c197521e180fedcfe62f88576b0bb513ceda467eca09b0353e6c5f43353"} Feb 19 10:03:50 crc kubenswrapper[4965]: I0219 10:03:50.362818 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-vtxxb" event={"ID":"712a9147-94b7-45f8-97a9-3c0a988f748d","Type":"ContainerStarted","Data":"6bae93998cadbb0f6db20398825e276d6023176ba1e54278ec384c904652ebc5"} Feb 19 10:03:50 crc 
kubenswrapper[4965]: I0219 10:03:50.389034 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-d7568bd5b-nvbtn" podStartSLOduration=3.389011376 podStartE2EDuration="3.389011376s" podCreationTimestamp="2026-02-19 10:03:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:50.380755225 +0000 UTC m=+1286.002076535" watchObservedRunningTime="2026-02-19 10:03:50.389011376 +0000 UTC m=+1286.010332706" Feb 19 10:03:50 crc kubenswrapper[4965]: I0219 10:03:50.422520 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-vtxxb" podStartSLOduration=3.422495939 podStartE2EDuration="3.422495939s" podCreationTimestamp="2026-02-19 10:03:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:50.42215916 +0000 UTC m=+1286.043480470" watchObservedRunningTime="2026-02-19 10:03:50.422495939 +0000 UTC m=+1286.043817249" Feb 19 10:03:50 crc kubenswrapper[4965]: I0219 10:03:50.854435 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.010587 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-ovsdbserver-sb\") pod \"06e93745-1a03-492b-be3e-233b34d13bff\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.010639 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-dns-swift-storage-0\") pod \"06e93745-1a03-492b-be3e-233b34d13bff\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.010733 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-dns-svc\") pod \"06e93745-1a03-492b-be3e-233b34d13bff\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.010760 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-ovsdbserver-nb\") pod \"06e93745-1a03-492b-be3e-233b34d13bff\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.010780 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm86t\" (UniqueName: \"kubernetes.io/projected/06e93745-1a03-492b-be3e-233b34d13bff-kube-api-access-bm86t\") pod \"06e93745-1a03-492b-be3e-233b34d13bff\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.010916 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-config\") pod \"06e93745-1a03-492b-be3e-233b34d13bff\" (UID: \"06e93745-1a03-492b-be3e-233b34d13bff\") " Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.028736 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06e93745-1a03-492b-be3e-233b34d13bff-kube-api-access-bm86t" (OuterVolumeSpecName: "kube-api-access-bm86t") pod "06e93745-1a03-492b-be3e-233b34d13bff" (UID: "06e93745-1a03-492b-be3e-233b34d13bff"). InnerVolumeSpecName "kube-api-access-bm86t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.041373 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "06e93745-1a03-492b-be3e-233b34d13bff" (UID: "06e93745-1a03-492b-be3e-233b34d13bff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.042783 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-config" (OuterVolumeSpecName: "config") pod "06e93745-1a03-492b-be3e-233b34d13bff" (UID: "06e93745-1a03-492b-be3e-233b34d13bff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.045613 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "06e93745-1a03-492b-be3e-233b34d13bff" (UID: "06e93745-1a03-492b-be3e-233b34d13bff"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.054160 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "06e93745-1a03-492b-be3e-233b34d13bff" (UID: "06e93745-1a03-492b-be3e-233b34d13bff"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.059870 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "06e93745-1a03-492b-be3e-233b34d13bff" (UID: "06e93745-1a03-492b-be3e-233b34d13bff"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.063552 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-54864c6876-6fmg4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.095958 4965 scope.go:117] "RemoveContainer" containerID="02531bd557132191c6be3729ea9f4a171329a4568aabbf64acc9d9438d720853" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.114444 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.114484 4965 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.114497 4965 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.114512 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.114524 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm86t\" (UniqueName: \"kubernetes.io/projected/06e93745-1a03-492b-be3e-233b34d13bff-kube-api-access-bm86t\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.114540 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06e93745-1a03-492b-be3e-233b34d13bff-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.216591 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcad3660-ade7-407c-9d77-bb1c2c2721a8" path="/var/lib/kubelet/pods/fcad3660-ade7-407c-9d77-bb1c2c2721a8/volumes" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.355736 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5cbc96c99c-x6mpf"] Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.356040 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5cbc96c99c-x6mpf" podUID="9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54" containerName="neutron-api" containerID="cri-o://137ccfcadc3afe6b21bbd9c90f0f855e1edb71876c6ad8e76ee2d91c8bb38ab5" gracePeriod=30 Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.356124 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5cbc96c99c-x6mpf" podUID="9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54" containerName="neutron-httpd" 
containerID="cri-o://0dd5ff578f88784de0c00bb232e2b39066e6bda734b72385e0e2339a6febfd1a" gracePeriod=30 Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.386618 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-76546766f9-plbd4"] Feb 19 10:03:51 crc kubenswrapper[4965]: E0219 10:03:51.387017 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e93745-1a03-492b-be3e-233b34d13bff" containerName="init" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.387032 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e93745-1a03-492b-be3e-233b34d13bff" containerName="init" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.391021 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="06e93745-1a03-492b-be3e-233b34d13bff" containerName="init" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.392210 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.398859 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6" event={"ID":"06e93745-1a03-492b-be3e-233b34d13bff","Type":"ContainerDied","Data":"59cbc38a5f388a9b4ec3d2b2f87c819b8ff8bfab3cbb76b0c4b970bff58040ff"} Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.398892 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-pv8l6" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.401550 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-76546766f9-plbd4"] Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.415574 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8","Type":"ContainerStarted","Data":"aca2560cfad27a77ad1d806aae5b17a0c4b6722235e43ff20de62370a59b472a"} Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.415620 4965 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.415635 4965 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.471516 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5cbc96c99c-x6mpf" podUID="9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.172:9696/\": read tcp 10.217.0.2:59084->10.217.0.172:9696: read: connection reset by peer" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.496269 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pv8l6"] Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.503753 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-pv8l6"] Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.534939 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/40c5d1a6-44fc-4f35-a393-d82f69dde17f-httpd-config\") pod \"neutron-76546766f9-plbd4\" (UID: \"40c5d1a6-44fc-4f35-a393-d82f69dde17f\") " pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.535291 4965 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj9xm\" (UniqueName: \"kubernetes.io/projected/40c5d1a6-44fc-4f35-a393-d82f69dde17f-kube-api-access-pj9xm\") pod \"neutron-76546766f9-plbd4\" (UID: \"40c5d1a6-44fc-4f35-a393-d82f69dde17f\") " pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.535326 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40c5d1a6-44fc-4f35-a393-d82f69dde17f-internal-tls-certs\") pod \"neutron-76546766f9-plbd4\" (UID: \"40c5d1a6-44fc-4f35-a393-d82f69dde17f\") " pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.535408 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/40c5d1a6-44fc-4f35-a393-d82f69dde17f-ovndb-tls-certs\") pod \"neutron-76546766f9-plbd4\" (UID: \"40c5d1a6-44fc-4f35-a393-d82f69dde17f\") " pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.535439 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40c5d1a6-44fc-4f35-a393-d82f69dde17f-combined-ca-bundle\") pod \"neutron-76546766f9-plbd4\" (UID: \"40c5d1a6-44fc-4f35-a393-d82f69dde17f\") " pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.535482 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/40c5d1a6-44fc-4f35-a393-d82f69dde17f-config\") pod \"neutron-76546766f9-plbd4\" (UID: \"40c5d1a6-44fc-4f35-a393-d82f69dde17f\") " pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.535526 
4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40c5d1a6-44fc-4f35-a393-d82f69dde17f-public-tls-certs\") pod \"neutron-76546766f9-plbd4\" (UID: \"40c5d1a6-44fc-4f35-a393-d82f69dde17f\") " pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.636868 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40c5d1a6-44fc-4f35-a393-d82f69dde17f-internal-tls-certs\") pod \"neutron-76546766f9-plbd4\" (UID: \"40c5d1a6-44fc-4f35-a393-d82f69dde17f\") " pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.636941 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/40c5d1a6-44fc-4f35-a393-d82f69dde17f-ovndb-tls-certs\") pod \"neutron-76546766f9-plbd4\" (UID: \"40c5d1a6-44fc-4f35-a393-d82f69dde17f\") " pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.636965 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40c5d1a6-44fc-4f35-a393-d82f69dde17f-combined-ca-bundle\") pod \"neutron-76546766f9-plbd4\" (UID: \"40c5d1a6-44fc-4f35-a393-d82f69dde17f\") " pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.636994 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/40c5d1a6-44fc-4f35-a393-d82f69dde17f-config\") pod \"neutron-76546766f9-plbd4\" (UID: \"40c5d1a6-44fc-4f35-a393-d82f69dde17f\") " pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.637032 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40c5d1a6-44fc-4f35-a393-d82f69dde17f-public-tls-certs\") pod \"neutron-76546766f9-plbd4\" (UID: \"40c5d1a6-44fc-4f35-a393-d82f69dde17f\") " pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.637123 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/40c5d1a6-44fc-4f35-a393-d82f69dde17f-httpd-config\") pod \"neutron-76546766f9-plbd4\" (UID: \"40c5d1a6-44fc-4f35-a393-d82f69dde17f\") " pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.637163 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj9xm\" (UniqueName: \"kubernetes.io/projected/40c5d1a6-44fc-4f35-a393-d82f69dde17f-kube-api-access-pj9xm\") pod \"neutron-76546766f9-plbd4\" (UID: \"40c5d1a6-44fc-4f35-a393-d82f69dde17f\") " pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.642016 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/40c5d1a6-44fc-4f35-a393-d82f69dde17f-config\") pod \"neutron-76546766f9-plbd4\" (UID: \"40c5d1a6-44fc-4f35-a393-d82f69dde17f\") " pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.642079 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40c5d1a6-44fc-4f35-a393-d82f69dde17f-combined-ca-bundle\") pod \"neutron-76546766f9-plbd4\" (UID: \"40c5d1a6-44fc-4f35-a393-d82f69dde17f\") " pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.642499 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/40c5d1a6-44fc-4f35-a393-d82f69dde17f-httpd-config\") pod 
\"neutron-76546766f9-plbd4\" (UID: \"40c5d1a6-44fc-4f35-a393-d82f69dde17f\") " pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.645169 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40c5d1a6-44fc-4f35-a393-d82f69dde17f-internal-tls-certs\") pod \"neutron-76546766f9-plbd4\" (UID: \"40c5d1a6-44fc-4f35-a393-d82f69dde17f\") " pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.645406 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40c5d1a6-44fc-4f35-a393-d82f69dde17f-public-tls-certs\") pod \"neutron-76546766f9-plbd4\" (UID: \"40c5d1a6-44fc-4f35-a393-d82f69dde17f\") " pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.653874 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/40c5d1a6-44fc-4f35-a393-d82f69dde17f-ovndb-tls-certs\") pod \"neutron-76546766f9-plbd4\" (UID: \"40c5d1a6-44fc-4f35-a393-d82f69dde17f\") " pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.658794 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj9xm\" (UniqueName: \"kubernetes.io/projected/40c5d1a6-44fc-4f35-a393-d82f69dde17f-kube-api-access-pj9xm\") pod \"neutron-76546766f9-plbd4\" (UID: \"40c5d1a6-44fc-4f35-a393-d82f69dde17f\") " pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.720123 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:51 crc kubenswrapper[4965]: I0219 10:03:51.945796 4965 scope.go:117] "RemoveContainer" containerID="de381582d7372deee1e68982192a31791bf778e7e92c28b6e090131710a6c23f" Feb 19 10:03:52 crc kubenswrapper[4965]: I0219 10:03:52.291698 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 10:03:52 crc kubenswrapper[4965]: I0219 10:03:52.301674 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 10:03:52 crc kubenswrapper[4965]: I0219 10:03:52.465205 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:03:52 crc kubenswrapper[4965]: I0219 10:03:52.492724 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6855d4854d-gc94v" event={"ID":"efe10142-642a-45d3-9f5a-8d1f2cb717e9","Type":"ContainerStarted","Data":"7dcff6f1f97d1c24229bdded921cb5df14eea8d7346d54c4e8b52a13c3286a85"} Feb 19 10:03:52 crc kubenswrapper[4965]: I0219 10:03:52.586823 4965 generic.go:334] "Generic (PLEG): container finished" podID="9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54" containerID="0dd5ff578f88784de0c00bb232e2b39066e6bda734b72385e0e2339a6febfd1a" exitCode=0 Feb 19 10:03:52 crc kubenswrapper[4965]: I0219 10:03:52.586916 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cbc96c99c-x6mpf" event={"ID":"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54","Type":"ContainerDied","Data":"0dd5ff578f88784de0c00bb232e2b39066e6bda734b72385e0e2339a6febfd1a"} Feb 19 10:03:52 crc kubenswrapper[4965]: I0219 10:03:52.678501 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" event={"ID":"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0","Type":"ContainerStarted","Data":"bac1040b115a7735de4fa6a91d378134b60b573bdc07567a2d3080b79f3dda6b"} Feb 19 10:03:52 crc 
kubenswrapper[4965]: I0219 10:03:52.679654 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" Feb 19 10:03:52 crc kubenswrapper[4965]: I0219 10:03:52.726905 4965 generic.go:334] "Generic (PLEG): container finished" podID="712a9147-94b7-45f8-97a9-3c0a988f748d" containerID="6bae93998cadbb0f6db20398825e276d6023176ba1e54278ec384c904652ebc5" exitCode=0 Feb 19 10:03:52 crc kubenswrapper[4965]: I0219 10:03:52.727000 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-vtxxb" event={"ID":"712a9147-94b7-45f8-97a9-3c0a988f748d","Type":"ContainerDied","Data":"6bae93998cadbb0f6db20398825e276d6023176ba1e54278ec384c904652ebc5"} Feb 19 10:03:52 crc kubenswrapper[4965]: I0219 10:03:52.762815 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" podStartSLOduration=5.76279656 podStartE2EDuration="5.76279656s" podCreationTimestamp="2026-02-19 10:03:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:52.734551654 +0000 UTC m=+1288.355872964" watchObservedRunningTime="2026-02-19 10:03:52.76279656 +0000 UTC m=+1288.384117870" Feb 19 10:03:52 crc kubenswrapper[4965]: I0219 10:03:52.789987 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55c5f69ff7-qk9n8" event={"ID":"a32c3eed-880c-428c-b58e-d89c763d11b9","Type":"ContainerStarted","Data":"cecdb88fc9bc557c8075fd9dbc5bb2e4ff353b1a7a148b24e05b220d4d10f5db"} Feb 19 10:03:52 crc kubenswrapper[4965]: I0219 10:03:52.797519 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-76546766f9-plbd4"] Feb 19 10:03:53 crc kubenswrapper[4965]: I0219 10:03:53.117777 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5cbc96c99c-x6mpf" podUID="9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54" 
containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.172:9696/\": dial tcp 10.217.0.172:9696: connect: connection refused" Feb 19 10:03:53 crc kubenswrapper[4965]: I0219 10:03:53.213351 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06e93745-1a03-492b-be3e-233b34d13bff" path="/var/lib/kubelet/pods/06e93745-1a03-492b-be3e-233b34d13bff/volumes" Feb 19 10:03:53 crc kubenswrapper[4965]: I0219 10:03:53.834624 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6855d4854d-gc94v" event={"ID":"efe10142-642a-45d3-9f5a-8d1f2cb717e9","Type":"ContainerStarted","Data":"0134e7e5236a20b80ee307b8555be122246e57350c8e209505cd73ac35e56adc"} Feb 19 10:03:53 crc kubenswrapper[4965]: I0219 10:03:53.840739 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5bf5dff6-70bd-4013-95d1-6e30d7e765a0","Type":"ContainerStarted","Data":"c04bc1c0bf157dfa28099dda62833b90199b2419967a9219f832600f93cdbbca"} Feb 19 10:03:53 crc kubenswrapper[4965]: I0219 10:03:53.844358 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"648dfd3e-578b-4808-84c2-6dd4b4a7954c","Type":"ContainerStarted","Data":"e142f3cf13924b1eb47f07fe5fdb88a893daad952a4d75c074a02fa0694e042d"} Feb 19 10:03:53 crc kubenswrapper[4965]: I0219 10:03:53.844402 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"648dfd3e-578b-4808-84c2-6dd4b4a7954c","Type":"ContainerStarted","Data":"f48cda5af32bbd778a95326b79f84412dc1e633b7815a9c03917eda9c3c9fd8e"} Feb 19 10:03:53 crc kubenswrapper[4965]: I0219 10:03:53.850273 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55c5f69ff7-qk9n8" event={"ID":"a32c3eed-880c-428c-b58e-d89c763d11b9","Type":"ContainerStarted","Data":"f4cc9ed92353cd9999ccdc6a09425ecceb1f1b9a68191b409afe7e3d0ab4ae44"} Feb 19 10:03:53 crc kubenswrapper[4965]: 
I0219 10:03:53.855978 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76546766f9-plbd4" event={"ID":"40c5d1a6-44fc-4f35-a393-d82f69dde17f","Type":"ContainerStarted","Data":"ce8919ff3f1a912c5810353c9dfab86ca6c911ba08099963222687901bbb7a5a"} Feb 19 10:03:53 crc kubenswrapper[4965]: I0219 10:03:53.856020 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76546766f9-plbd4" event={"ID":"40c5d1a6-44fc-4f35-a393-d82f69dde17f","Type":"ContainerStarted","Data":"9a32daba067ea31eb11c877e6449b29daf4e49de5739a2d78969c49623141dfe"} Feb 19 10:03:53 crc kubenswrapper[4965]: I0219 10:03:53.856031 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76546766f9-plbd4" event={"ID":"40c5d1a6-44fc-4f35-a393-d82f69dde17f","Type":"ContainerStarted","Data":"093e43ef4c8a899c722c8b30f6a1e78e74171333b59cee6d27f47d4b394eb26d"} Feb 19 10:03:53 crc kubenswrapper[4965]: I0219 10:03:53.856147 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:03:53 crc kubenswrapper[4965]: I0219 10:03:53.858768 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6855d4854d-gc94v" podStartSLOduration=4.541870207 podStartE2EDuration="7.858740409s" podCreationTimestamp="2026-02-19 10:03:46 +0000 UTC" firstStartedPulling="2026-02-19 10:03:48.674553339 +0000 UTC m=+1284.295874649" lastFinishedPulling="2026-02-19 10:03:51.991423551 +0000 UTC m=+1287.612744851" observedRunningTime="2026-02-19 10:03:53.851303858 +0000 UTC m=+1289.472625168" watchObservedRunningTime="2026-02-19 10:03:53.858740409 +0000 UTC m=+1289.480061729" Feb 19 10:03:53 crc kubenswrapper[4965]: I0219 10:03:53.871496 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8","Type":"ContainerStarted","Data":"a60392232dfe2d29976017ebb4400c1b12da8611796e84a5d35349313cfd48ff"} Feb 19 10:03:53 crc kubenswrapper[4965]: I0219 10:03:53.871724 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f62c8ad7-6089-47f6-ab15-b7ec5eee53d8" containerName="cinder-api-log" containerID="cri-o://aca2560cfad27a77ad1d806aae5b17a0c4b6722235e43ff20de62370a59b472a" gracePeriod=30 Feb 19 10:03:53 crc kubenswrapper[4965]: I0219 10:03:53.871932 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f62c8ad7-6089-47f6-ab15-b7ec5eee53d8" containerName="cinder-api" containerID="cri-o://a60392232dfe2d29976017ebb4400c1b12da8611796e84a5d35349313cfd48ff" gracePeriod=30 Feb 19 10:03:53 crc kubenswrapper[4965]: I0219 10:03:53.872287 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 10:03:53 crc kubenswrapper[4965]: I0219 10:03:53.889102 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-55c5f69ff7-qk9n8" podStartSLOduration=4.623067557 podStartE2EDuration="7.889081465s" podCreationTimestamp="2026-02-19 10:03:46 +0000 UTC" firstStartedPulling="2026-02-19 10:03:48.753327471 +0000 UTC m=+1284.374648781" lastFinishedPulling="2026-02-19 10:03:52.019341379 +0000 UTC m=+1287.640662689" observedRunningTime="2026-02-19 10:03:53.883714126 +0000 UTC m=+1289.505035436" watchObservedRunningTime="2026-02-19 10:03:53.889081465 +0000 UTC m=+1289.510402775" Feb 19 10:03:53 crc kubenswrapper[4965]: I0219 10:03:53.920277 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-76546766f9-plbd4" podStartSLOduration=2.920250813 podStartE2EDuration="2.920250813s" podCreationTimestamp="2026-02-19 10:03:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:53.903745082 +0000 UTC m=+1289.525066422" watchObservedRunningTime="2026-02-19 10:03:53.920250813 +0000 UTC m=+1289.541572123" Feb 19 10:03:53 crc kubenswrapper[4965]: I0219 10:03:53.942872 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.942848741 podStartE2EDuration="6.942848741s" podCreationTimestamp="2026-02-19 10:03:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:53.927652792 +0000 UTC m=+1289.548974102" watchObservedRunningTime="2026-02-19 10:03:53.942848741 +0000 UTC m=+1289.564170051" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.348296 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-vtxxb" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.426141 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/712a9147-94b7-45f8-97a9-3c0a988f748d-combined-ca-bundle\") pod \"712a9147-94b7-45f8-97a9-3c0a988f748d\" (UID: \"712a9147-94b7-45f8-97a9-3c0a988f748d\") " Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.426188 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/712a9147-94b7-45f8-97a9-3c0a988f748d-certs\") pod \"712a9147-94b7-45f8-97a9-3c0a988f748d\" (UID: \"712a9147-94b7-45f8-97a9-3c0a988f748d\") " Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.426425 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jz7k\" (UniqueName: \"kubernetes.io/projected/712a9147-94b7-45f8-97a9-3c0a988f748d-kube-api-access-6jz7k\") pod \"712a9147-94b7-45f8-97a9-3c0a988f748d\" (UID: 
\"712a9147-94b7-45f8-97a9-3c0a988f748d\") " Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.426445 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/712a9147-94b7-45f8-97a9-3c0a988f748d-config-data\") pod \"712a9147-94b7-45f8-97a9-3c0a988f748d\" (UID: \"712a9147-94b7-45f8-97a9-3c0a988f748d\") " Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.426486 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/712a9147-94b7-45f8-97a9-3c0a988f748d-scripts\") pod \"712a9147-94b7-45f8-97a9-3c0a988f748d\" (UID: \"712a9147-94b7-45f8-97a9-3c0a988f748d\") " Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.433466 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/712a9147-94b7-45f8-97a9-3c0a988f748d-kube-api-access-6jz7k" (OuterVolumeSpecName: "kube-api-access-6jz7k") pod "712a9147-94b7-45f8-97a9-3c0a988f748d" (UID: "712a9147-94b7-45f8-97a9-3c0a988f748d"). InnerVolumeSpecName "kube-api-access-6jz7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.435411 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/712a9147-94b7-45f8-97a9-3c0a988f748d-certs" (OuterVolumeSpecName: "certs") pod "712a9147-94b7-45f8-97a9-3c0a988f748d" (UID: "712a9147-94b7-45f8-97a9-3c0a988f748d"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.441481 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/712a9147-94b7-45f8-97a9-3c0a988f748d-scripts" (OuterVolumeSpecName: "scripts") pod "712a9147-94b7-45f8-97a9-3c0a988f748d" (UID: "712a9147-94b7-45f8-97a9-3c0a988f748d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.471320 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-c97f468c6-bwf6p"] Feb 19 10:03:54 crc kubenswrapper[4965]: E0219 10:03:54.471767 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="712a9147-94b7-45f8-97a9-3c0a988f748d" containerName="cloudkitty-storageinit" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.471785 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="712a9147-94b7-45f8-97a9-3c0a988f748d" containerName="cloudkitty-storageinit" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.471992 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="712a9147-94b7-45f8-97a9-3c0a988f748d" containerName="cloudkitty-storageinit" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.473080 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-c97f468c6-bwf6p" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.474070 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/712a9147-94b7-45f8-97a9-3c0a988f748d-config-data" (OuterVolumeSpecName: "config-data") pod "712a9147-94b7-45f8-97a9-3c0a988f748d" (UID: "712a9147-94b7-45f8-97a9-3c0a988f748d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.480841 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.481096 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.490112 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/712a9147-94b7-45f8-97a9-3c0a988f748d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "712a9147-94b7-45f8-97a9-3c0a988f748d" (UID: "712a9147-94b7-45f8-97a9-3c0a988f748d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.491756 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-c97f468c6-bwf6p"] Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.533365 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45f4a2b8-338f-4c3d-afe8-305eb599081c-combined-ca-bundle\") pod \"barbican-api-c97f468c6-bwf6p\" (UID: \"45f4a2b8-338f-4c3d-afe8-305eb599081c\") " pod="openstack/barbican-api-c97f468c6-bwf6p" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.533441 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45f4a2b8-338f-4c3d-afe8-305eb599081c-logs\") pod \"barbican-api-c97f468c6-bwf6p\" (UID: \"45f4a2b8-338f-4c3d-afe8-305eb599081c\") " pod="openstack/barbican-api-c97f468c6-bwf6p" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.533471 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/45f4a2b8-338f-4c3d-afe8-305eb599081c-internal-tls-certs\") pod \"barbican-api-c97f468c6-bwf6p\" (UID: \"45f4a2b8-338f-4c3d-afe8-305eb599081c\") " pod="openstack/barbican-api-c97f468c6-bwf6p" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.533505 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45f4a2b8-338f-4c3d-afe8-305eb599081c-config-data-custom\") pod \"barbican-api-c97f468c6-bwf6p\" (UID: \"45f4a2b8-338f-4c3d-afe8-305eb599081c\") " pod="openstack/barbican-api-c97f468c6-bwf6p" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.533537 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45f4a2b8-338f-4c3d-afe8-305eb599081c-config-data\") pod \"barbican-api-c97f468c6-bwf6p\" (UID: \"45f4a2b8-338f-4c3d-afe8-305eb599081c\") " pod="openstack/barbican-api-c97f468c6-bwf6p" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.533557 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45f4a2b8-338f-4c3d-afe8-305eb599081c-public-tls-certs\") pod \"barbican-api-c97f468c6-bwf6p\" (UID: \"45f4a2b8-338f-4c3d-afe8-305eb599081c\") " pod="openstack/barbican-api-c97f468c6-bwf6p" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.533598 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxqw9\" (UniqueName: \"kubernetes.io/projected/45f4a2b8-338f-4c3d-afe8-305eb599081c-kube-api-access-nxqw9\") pod \"barbican-api-c97f468c6-bwf6p\" (UID: \"45f4a2b8-338f-4c3d-afe8-305eb599081c\") " pod="openstack/barbican-api-c97f468c6-bwf6p" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.533715 4965 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-6jz7k\" (UniqueName: \"kubernetes.io/projected/712a9147-94b7-45f8-97a9-3c0a988f748d-kube-api-access-6jz7k\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.533727 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/712a9147-94b7-45f8-97a9-3c0a988f748d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.533735 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/712a9147-94b7-45f8-97a9-3c0a988f748d-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.533744 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/712a9147-94b7-45f8-97a9-3c0a988f748d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.533754 4965 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/712a9147-94b7-45f8-97a9-3c0a988f748d-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.635905 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxqw9\" (UniqueName: \"kubernetes.io/projected/45f4a2b8-338f-4c3d-afe8-305eb599081c-kube-api-access-nxqw9\") pod \"barbican-api-c97f468c6-bwf6p\" (UID: \"45f4a2b8-338f-4c3d-afe8-305eb599081c\") " pod="openstack/barbican-api-c97f468c6-bwf6p" Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.636461 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45f4a2b8-338f-4c3d-afe8-305eb599081c-combined-ca-bundle\") pod \"barbican-api-c97f468c6-bwf6p\" (UID: \"45f4a2b8-338f-4c3d-afe8-305eb599081c\") " pod="openstack/barbican-api-c97f468c6-bwf6p" Feb 19 
10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.636523 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45f4a2b8-338f-4c3d-afe8-305eb599081c-logs\") pod \"barbican-api-c97f468c6-bwf6p\" (UID: \"45f4a2b8-338f-4c3d-afe8-305eb599081c\") " pod="openstack/barbican-api-c97f468c6-bwf6p"
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.636553 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45f4a2b8-338f-4c3d-afe8-305eb599081c-internal-tls-certs\") pod \"barbican-api-c97f468c6-bwf6p\" (UID: \"45f4a2b8-338f-4c3d-afe8-305eb599081c\") " pod="openstack/barbican-api-c97f468c6-bwf6p"
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.636598 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45f4a2b8-338f-4c3d-afe8-305eb599081c-config-data-custom\") pod \"barbican-api-c97f468c6-bwf6p\" (UID: \"45f4a2b8-338f-4c3d-afe8-305eb599081c\") " pod="openstack/barbican-api-c97f468c6-bwf6p"
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.636640 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45f4a2b8-338f-4c3d-afe8-305eb599081c-config-data\") pod \"barbican-api-c97f468c6-bwf6p\" (UID: \"45f4a2b8-338f-4c3d-afe8-305eb599081c\") " pod="openstack/barbican-api-c97f468c6-bwf6p"
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.636669 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45f4a2b8-338f-4c3d-afe8-305eb599081c-public-tls-certs\") pod \"barbican-api-c97f468c6-bwf6p\" (UID: \"45f4a2b8-338f-4c3d-afe8-305eb599081c\") " pod="openstack/barbican-api-c97f468c6-bwf6p"
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.637648 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45f4a2b8-338f-4c3d-afe8-305eb599081c-logs\") pod \"barbican-api-c97f468c6-bwf6p\" (UID: \"45f4a2b8-338f-4c3d-afe8-305eb599081c\") " pod="openstack/barbican-api-c97f468c6-bwf6p"
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.642855 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45f4a2b8-338f-4c3d-afe8-305eb599081c-public-tls-certs\") pod \"barbican-api-c97f468c6-bwf6p\" (UID: \"45f4a2b8-338f-4c3d-afe8-305eb599081c\") " pod="openstack/barbican-api-c97f468c6-bwf6p"
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.642889 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45f4a2b8-338f-4c3d-afe8-305eb599081c-config-data-custom\") pod \"barbican-api-c97f468c6-bwf6p\" (UID: \"45f4a2b8-338f-4c3d-afe8-305eb599081c\") " pod="openstack/barbican-api-c97f468c6-bwf6p"
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.643318 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45f4a2b8-338f-4c3d-afe8-305eb599081c-internal-tls-certs\") pod \"barbican-api-c97f468c6-bwf6p\" (UID: \"45f4a2b8-338f-4c3d-afe8-305eb599081c\") " pod="openstack/barbican-api-c97f468c6-bwf6p"
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.643976 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45f4a2b8-338f-4c3d-afe8-305eb599081c-combined-ca-bundle\") pod \"barbican-api-c97f468c6-bwf6p\" (UID: \"45f4a2b8-338f-4c3d-afe8-305eb599081c\") " pod="openstack/barbican-api-c97f468c6-bwf6p"
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.653200 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45f4a2b8-338f-4c3d-afe8-305eb599081c-config-data\") pod \"barbican-api-c97f468c6-bwf6p\" (UID: \"45f4a2b8-338f-4c3d-afe8-305eb599081c\") " pod="openstack/barbican-api-c97f468c6-bwf6p"
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.662903 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxqw9\" (UniqueName: \"kubernetes.io/projected/45f4a2b8-338f-4c3d-afe8-305eb599081c-kube-api-access-nxqw9\") pod \"barbican-api-c97f468c6-bwf6p\" (UID: \"45f4a2b8-338f-4c3d-afe8-305eb599081c\") " pod="openstack/barbican-api-c97f468c6-bwf6p"
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.813342 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-c97f468c6-bwf6p"
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.895723 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5bf5dff6-70bd-4013-95d1-6e30d7e765a0","Type":"ContainerStarted","Data":"3c55fc999dbe5b5d627568c9e85c57dcd835bf47c420669edb7a2ad4c1b94538"}
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.900299 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"648dfd3e-578b-4808-84c2-6dd4b4a7954c","Type":"ContainerStarted","Data":"b4c83774e861ec73bc9c0dfddcfedf12568c709a0c36018bd09c4e6870629001"}
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.900334 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"648dfd3e-578b-4808-84c2-6dd4b4a7954c","Type":"ContainerStarted","Data":"28fb6b5acdca274bb67d1413f4c1739d36b826769dec9bc6ccbf95b4ebc87265"}
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.905942 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-vtxxb" event={"ID":"712a9147-94b7-45f8-97a9-3c0a988f748d","Type":"ContainerDied","Data":"f19af3184ea5ecc7079b43e1007844543091623f7186f40357babcfe0d342503"}
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.905979 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f19af3184ea5ecc7079b43e1007844543091623f7186f40357babcfe0d342503"
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.906048 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-vtxxb"
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.913756 4965 generic.go:334] "Generic (PLEG): container finished" podID="f62c8ad7-6089-47f6-ab15-b7ec5eee53d8" containerID="a60392232dfe2d29976017ebb4400c1b12da8611796e84a5d35349313cfd48ff" exitCode=0
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.913791 4965 generic.go:334] "Generic (PLEG): container finished" podID="f62c8ad7-6089-47f6-ab15-b7ec5eee53d8" containerID="aca2560cfad27a77ad1d806aae5b17a0c4b6722235e43ff20de62370a59b472a" exitCode=143
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.914709 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8","Type":"ContainerDied","Data":"a60392232dfe2d29976017ebb4400c1b12da8611796e84a5d35349313cfd48ff"}
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.914739 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8","Type":"ContainerDied","Data":"aca2560cfad27a77ad1d806aae5b17a0c4b6722235e43ff20de62370a59b472a"}
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.914749 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8","Type":"ContainerDied","Data":"c1215623fb5856bfd9ebfbd039c47be193026820dd2ae384c59e5fe27016deda"}
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.914759 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1215623fb5856bfd9ebfbd039c47be193026820dd2ae384c59e5fe27016deda"
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.936623 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.697650499 podStartE2EDuration="7.936600029s" podCreationTimestamp="2026-02-19 10:03:47 +0000 UTC" firstStartedPulling="2026-02-19 10:03:48.753618219 +0000 UTC m=+1284.374939519" lastFinishedPulling="2026-02-19 10:03:51.992567739 +0000 UTC m=+1287.613889049" observedRunningTime="2026-02-19 10:03:54.927809655 +0000 UTC m=+1290.549130975" watchObservedRunningTime="2026-02-19 10:03:54.936600029 +0000 UTC m=+1290.557921339"
Feb 19 10:03:54 crc kubenswrapper[4965]: I0219 10:03:54.985279 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.046331 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-etc-machine-id\") pod \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") "
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.047825 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-scripts\") pod \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") "
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.047542 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f62c8ad7-6089-47f6-ab15-b7ec5eee53d8" (UID: "f62c8ad7-6089-47f6-ab15-b7ec5eee53d8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.047893 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-config-data\") pod \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") "
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.047925 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-config-data-custom\") pod \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") "
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.048069 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-logs\") pod \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") "
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.048095 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-combined-ca-bundle\") pod \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") "
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.048250 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmwsh\" (UniqueName: \"kubernetes.io/projected/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-kube-api-access-rmwsh\") pod \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\" (UID: \"f62c8ad7-6089-47f6-ab15-b7ec5eee53d8\") "
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.060568 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-scripts" (OuterVolumeSpecName: "scripts") pod "f62c8ad7-6089-47f6-ab15-b7ec5eee53d8" (UID: "f62c8ad7-6089-47f6-ab15-b7ec5eee53d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.070876 4965 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.071089 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-kube-api-access-rmwsh" (OuterVolumeSpecName: "kube-api-access-rmwsh") pod "f62c8ad7-6089-47f6-ab15-b7ec5eee53d8" (UID: "f62c8ad7-6089-47f6-ab15-b7ec5eee53d8"). InnerVolumeSpecName "kube-api-access-rmwsh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.083382 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f62c8ad7-6089-47f6-ab15-b7ec5eee53d8" (UID: "f62c8ad7-6089-47f6-ab15-b7ec5eee53d8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.090497 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-logs" (OuterVolumeSpecName: "logs") pod "f62c8ad7-6089-47f6-ab15-b7ec5eee53d8" (UID: "f62c8ad7-6089-47f6-ab15-b7ec5eee53d8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.096158 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"]
Feb 19 10:03:55 crc kubenswrapper[4965]: E0219 10:03:55.100437 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f62c8ad7-6089-47f6-ab15-b7ec5eee53d8" containerName="cinder-api"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.100469 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="f62c8ad7-6089-47f6-ab15-b7ec5eee53d8" containerName="cinder-api"
Feb 19 10:03:55 crc kubenswrapper[4965]: E0219 10:03:55.100499 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f62c8ad7-6089-47f6-ab15-b7ec5eee53d8" containerName="cinder-api-log"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.100506 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="f62c8ad7-6089-47f6-ab15-b7ec5eee53d8" containerName="cinder-api-log"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.100717 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="f62c8ad7-6089-47f6-ab15-b7ec5eee53d8" containerName="cinder-api-log"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.100740 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="f62c8ad7-6089-47f6-ab15-b7ec5eee53d8" containerName="cinder-api"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.101427 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.104420 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.104627 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.104745 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.110703 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.117544 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-rbdxx"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.137030 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f62c8ad7-6089-47f6-ab15-b7ec5eee53d8" (UID: "f62c8ad7-6089-47f6-ab15-b7ec5eee53d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.146487 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"]
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.159945 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g6g6j"]
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.173765 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.173845 4965 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.173862 4965 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-logs\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.173873 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.173886 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmwsh\" (UniqueName: \"kubernetes.io/projected/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-kube-api-access-rmwsh\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.189658 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-66r4m"]
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.193973 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-66r4m"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.220350 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-config-data" (OuterVolumeSpecName: "config-data") pod "f62c8ad7-6089-47f6-ab15-b7ec5eee53d8" (UID: "f62c8ad7-6089-47f6-ab15-b7ec5eee53d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.275790 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-66r4m"]
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.276116 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.275960 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-dns-swift-storage-0\") pod \"dnsmasq-dns-58bd69657f-66r4m\" (UID: \"feacab4f-9866-41e7-a8c2-e9850aff1252\") " pod="openstack/dnsmasq-dns-58bd69657f-66r4m"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.277677 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.277760 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.278225 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llkw4\" (UniqueName: \"kubernetes.io/projected/908accc3-aea8-40d3-a13f-c197badfa0d1-kube-api-access-llkw4\") pod \"cloudkitty-proc-0\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " pod="openstack/cloudkitty-proc-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.278413 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/908accc3-aea8-40d3-a13f-c197badfa0d1-certs\") pod \"cloudkitty-proc-0\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " pod="openstack/cloudkitty-proc-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.278488 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-scripts\") pod \"cloudkitty-proc-0\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " pod="openstack/cloudkitty-proc-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.278546 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " pod="openstack/cloudkitty-proc-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.278577 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-config\") pod \"dnsmasq-dns-58bd69657f-66r4m\" (UID: \"feacab4f-9866-41e7-a8c2-e9850aff1252\") " pod="openstack/dnsmasq-dns-58bd69657f-66r4m"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.278716 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd69657f-66r4m\" (UID: \"feacab4f-9866-41e7-a8c2-e9850aff1252\") " pod="openstack/dnsmasq-dns-58bd69657f-66r4m"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.278884 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd69657f-66r4m\" (UID: \"feacab4f-9866-41e7-a8c2-e9850aff1252\") " pod="openstack/dnsmasq-dns-58bd69657f-66r4m"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.280286 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.280518 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-dns-svc\") pod \"dnsmasq-dns-58bd69657f-66r4m\" (UID: \"feacab4f-9866-41e7-a8c2-e9850aff1252\") " pod="openstack/dnsmasq-dns-58bd69657f-66r4m"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.281402 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-config-data\") pod \"cloudkitty-proc-0\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " pod="openstack/cloudkitty-proc-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.281513 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " pod="openstack/cloudkitty-proc-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.281585 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnggr\" (UniqueName: \"kubernetes.io/projected/feacab4f-9866-41e7-a8c2-e9850aff1252-kube-api-access-lnggr\") pod \"dnsmasq-dns-58bd69657f-66r4m\" (UID: \"feacab4f-9866-41e7-a8c2-e9850aff1252\") " pod="openstack/dnsmasq-dns-58bd69657f-66r4m"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.282531 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.385778 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd69657f-66r4m\" (UID: \"feacab4f-9866-41e7-a8c2-e9850aff1252\") " pod="openstack/dnsmasq-dns-58bd69657f-66r4m"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.385892 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd69657f-66r4m\" (UID: \"feacab4f-9866-41e7-a8c2-e9850aff1252\") " pod="openstack/dnsmasq-dns-58bd69657f-66r4m"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.385915 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-dns-svc\") pod \"dnsmasq-dns-58bd69657f-66r4m\" (UID: \"feacab4f-9866-41e7-a8c2-e9850aff1252\") " pod="openstack/dnsmasq-dns-58bd69657f-66r4m"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.385968 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-logs\") pod \"cloudkitty-api-0\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.385988 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfkb8\" (UniqueName: \"kubernetes.io/projected/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-kube-api-access-hfkb8\") pod \"cloudkitty-api-0\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.386033 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-config-data\") pod \"cloudkitty-proc-0\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " pod="openstack/cloudkitty-proc-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.386068 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.386086 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-config-data\") pod \"cloudkitty-api-0\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.386120 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " pod="openstack/cloudkitty-proc-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.386199 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnggr\" (UniqueName: \"kubernetes.io/projected/feacab4f-9866-41e7-a8c2-e9850aff1252-kube-api-access-lnggr\") pod \"dnsmasq-dns-58bd69657f-66r4m\" (UID: \"feacab4f-9866-41e7-a8c2-e9850aff1252\") " pod="openstack/dnsmasq-dns-58bd69657f-66r4m"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.386255 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.386290 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-dns-swift-storage-0\") pod \"dnsmasq-dns-58bd69657f-66r4m\" (UID: \"feacab4f-9866-41e7-a8c2-e9850aff1252\") " pod="openstack/dnsmasq-dns-58bd69657f-66r4m"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.386317 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-certs\") pod \"cloudkitty-api-0\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.386361 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llkw4\" (UniqueName: \"kubernetes.io/projected/908accc3-aea8-40d3-a13f-c197badfa0d1-kube-api-access-llkw4\") pod \"cloudkitty-proc-0\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " pod="openstack/cloudkitty-proc-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.386414 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-scripts\") pod \"cloudkitty-api-0\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.386443 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/908accc3-aea8-40d3-a13f-c197badfa0d1-certs\") pod \"cloudkitty-proc-0\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " pod="openstack/cloudkitty-proc-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.386460 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-scripts\") pod \"cloudkitty-proc-0\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " pod="openstack/cloudkitty-proc-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.386483 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " pod="openstack/cloudkitty-proc-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.386498 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-config\") pod \"dnsmasq-dns-58bd69657f-66r4m\" (UID: \"feacab4f-9866-41e7-a8c2-e9850aff1252\") " pod="openstack/dnsmasq-dns-58bd69657f-66r4m"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.387467 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-config\") pod \"dnsmasq-dns-58bd69657f-66r4m\" (UID: \"feacab4f-9866-41e7-a8c2-e9850aff1252\") " pod="openstack/dnsmasq-dns-58bd69657f-66r4m"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.387714 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd69657f-66r4m\" (UID: \"feacab4f-9866-41e7-a8c2-e9850aff1252\") " pod="openstack/dnsmasq-dns-58bd69657f-66r4m"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.388361 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-dns-swift-storage-0\") pod \"dnsmasq-dns-58bd69657f-66r4m\" (UID: \"feacab4f-9866-41e7-a8c2-e9850aff1252\") " pod="openstack/dnsmasq-dns-58bd69657f-66r4m"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.390949 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-dns-svc\") pod \"dnsmasq-dns-58bd69657f-66r4m\" (UID: \"feacab4f-9866-41e7-a8c2-e9850aff1252\") " pod="openstack/dnsmasq-dns-58bd69657f-66r4m"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.395110 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd69657f-66r4m\" (UID: \"feacab4f-9866-41e7-a8c2-e9850aff1252\") " pod="openstack/dnsmasq-dns-58bd69657f-66r4m"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.396045 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " pod="openstack/cloudkitty-proc-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.397011 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-config-data\") pod \"cloudkitty-proc-0\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " pod="openstack/cloudkitty-proc-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.409498 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " pod="openstack/cloudkitty-proc-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.413849 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/908accc3-aea8-40d3-a13f-c197badfa0d1-certs\") pod \"cloudkitty-proc-0\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " pod="openstack/cloudkitty-proc-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.415058 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnggr\" (UniqueName: \"kubernetes.io/projected/feacab4f-9866-41e7-a8c2-e9850aff1252-kube-api-access-lnggr\") pod \"dnsmasq-dns-58bd69657f-66r4m\" (UID: \"feacab4f-9866-41e7-a8c2-e9850aff1252\") " pod="openstack/dnsmasq-dns-58bd69657f-66r4m"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.415633 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-scripts\") pod \"cloudkitty-proc-0\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " pod="openstack/cloudkitty-proc-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.419479 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llkw4\" (UniqueName: \"kubernetes.io/projected/908accc3-aea8-40d3-a13f-c197badfa0d1-kube-api-access-llkw4\") pod \"cloudkitty-proc-0\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " pod="openstack/cloudkitty-proc-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.463478 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.488378 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-scripts\") pod \"cloudkitty-api-0\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.488517 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-logs\") pod \"cloudkitty-api-0\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.488552 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfkb8\" (UniqueName: \"kubernetes.io/projected/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-kube-api-access-hfkb8\") pod \"cloudkitty-api-0\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.488596 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.488614 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-config-data\") pod \"cloudkitty-api-0\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.488655 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.488686 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-certs\") pod \"cloudkitty-api-0\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.494001 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-logs\") pod \"cloudkitty-api-0\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.501060 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.501229 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-scripts\") pod \"cloudkitty-api-0\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.502003 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-config-data\") pod \"cloudkitty-api-0\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.503892 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.505062 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-certs\") pod \"cloudkitty-api-0\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.526736 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfkb8\" (UniqueName: \"kubernetes.io/projected/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-kube-api-access-hfkb8\") pod \"cloudkitty-api-0\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.580826 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-66r4m" Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.613460 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 19 10:03:55 crc kubenswrapper[4965]: I0219 10:03:55.760043 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-c97f468c6-bwf6p"] Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.013368 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c97f468c6-bwf6p" event={"ID":"45f4a2b8-338f-4c3d-afe8-305eb599081c","Type":"ContainerStarted","Data":"3b75d50ba49e925fb89162c1ee8d2eed18171a75fe330234fa3dcd7204d03190"} Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.013436 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.014521 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" podUID="c21e28dc-1ac5-404c-bbb3-6298f94fc5d0" containerName="dnsmasq-dns" containerID="cri-o://bac1040b115a7735de4fa6a91d378134b60b573bdc07567a2d3080b79f3dda6b" gracePeriod=10 Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.377781 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.418708 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.444274 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.446632 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.458883 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.459094 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.459235 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.463567 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.648614 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92030a04-19d0-4766-b560-3d5b64be8716-scripts\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.648883 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4wrg\" (UniqueName: \"kubernetes.io/projected/92030a04-19d0-4766-b560-3d5b64be8716-kube-api-access-l4wrg\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.648980 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92030a04-19d0-4766-b560-3d5b64be8716-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.649133 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92030a04-19d0-4766-b560-3d5b64be8716-config-data-custom\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.649243 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92030a04-19d0-4766-b560-3d5b64be8716-etc-machine-id\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.649325 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92030a04-19d0-4766-b560-3d5b64be8716-public-tls-certs\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.649506 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92030a04-19d0-4766-b560-3d5b64be8716-config-data\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.649635 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92030a04-19d0-4766-b560-3d5b64be8716-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.649731 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92030a04-19d0-4766-b560-3d5b64be8716-logs\") pod \"cinder-api-0\" 
(UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.750429 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92030a04-19d0-4766-b560-3d5b64be8716-config-data\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.750982 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92030a04-19d0-4766-b560-3d5b64be8716-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.751089 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92030a04-19d0-4766-b560-3d5b64be8716-logs\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.751222 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92030a04-19d0-4766-b560-3d5b64be8716-scripts\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.751303 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4wrg\" (UniqueName: \"kubernetes.io/projected/92030a04-19d0-4766-b560-3d5b64be8716-kube-api-access-l4wrg\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.751394 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92030a04-19d0-4766-b560-3d5b64be8716-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.751482 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92030a04-19d0-4766-b560-3d5b64be8716-config-data-custom\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.751558 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92030a04-19d0-4766-b560-3d5b64be8716-etc-machine-id\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.751665 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92030a04-19d0-4766-b560-3d5b64be8716-public-tls-certs\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.760909 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92030a04-19d0-4766-b560-3d5b64be8716-logs\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.760989 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92030a04-19d0-4766-b560-3d5b64be8716-etc-machine-id\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc 
kubenswrapper[4965]: I0219 10:03:56.770483 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92030a04-19d0-4766-b560-3d5b64be8716-public-tls-certs\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.777422 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.785884 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4wrg\" (UniqueName: \"kubernetes.io/projected/92030a04-19d0-4766-b560-3d5b64be8716-kube-api-access-l4wrg\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.786361 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92030a04-19d0-4766-b560-3d5b64be8716-scripts\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.787232 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92030a04-19d0-4766-b560-3d5b64be8716-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.788786 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-66r4m"] Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.789831 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92030a04-19d0-4766-b560-3d5b64be8716-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.794250 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92030a04-19d0-4766-b560-3d5b64be8716-config-data-custom\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: I0219 10:03:56.825184 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92030a04-19d0-4766-b560-3d5b64be8716-config-data\") pod \"cinder-api-0\" (UID: \"92030a04-19d0-4766-b560-3d5b64be8716\") " pod="openstack/cinder-api-0" Feb 19 10:03:56 crc kubenswrapper[4965]: W0219 10:03:56.881373 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfeacab4f_9866_41e7_a8c2_e9850aff1252.slice/crio-52a02ffcc46a894acb9872515a61488b2e823b86c7010b316f702acef9b3762d WatchSource:0}: Error finding container 52a02ffcc46a894acb9872515a61488b2e823b86c7010b316f702acef9b3762d: Status 404 returned error can't find the container with id 52a02ffcc46a894acb9872515a61488b2e823b86c7010b316f702acef9b3762d Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.061050 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c97f468c6-bwf6p" event={"ID":"45f4a2b8-338f-4c3d-afe8-305eb599081c","Type":"ContainerStarted","Data":"8138f832d5ae02a1d7839ebbcd54d340034ed88ee942e4b23a9168162ebb8edf"} Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.074976 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"648dfd3e-578b-4808-84c2-6dd4b4a7954c","Type":"ContainerStarted","Data":"0400aa7c9dfd960cf23d08febf3d6ae9bd1f5875a6b095538c6e6b69821b1ee9"} Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.077044 4965 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.082488 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.122032 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"908accc3-aea8-40d3-a13f-c197badfa0d1","Type":"ContainerStarted","Data":"5f86309983052a49013be8e1e26eac3eb9e0955dce9f977ca2cdbafd7c03a830"} Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.131689 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.1317027060000004 podStartE2EDuration="8.131665753s" podCreationTimestamp="2026-02-19 10:03:49 +0000 UTC" firstStartedPulling="2026-02-19 10:03:52.492842475 +0000 UTC m=+1288.114163785" lastFinishedPulling="2026-02-19 10:03:56.492805522 +0000 UTC m=+1292.114126832" observedRunningTime="2026-02-19 10:03:57.107175929 +0000 UTC m=+1292.728497239" watchObservedRunningTime="2026-02-19 10:03:57.131665753 +0000 UTC m=+1292.752987063" Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.147394 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.176489 4965 generic.go:334] "Generic (PLEG): container finished" podID="9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54" containerID="137ccfcadc3afe6b21bbd9c90f0f855e1edb71876c6ad8e76ee2d91c8bb38ab5" exitCode=0 Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.176543 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cbc96c99c-x6mpf" event={"ID":"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54","Type":"ContainerDied","Data":"137ccfcadc3afe6b21bbd9c90f0f855e1edb71876c6ad8e76ee2d91c8bb38ab5"} Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.182270 4965 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" Feb 19 10:03:57 crc kubenswrapper[4965]: W0219 10:03:57.223976 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbcb0cd7_89d6_4c32_8f15_f1ebc4eeb5b6.slice/crio-0f75ba3112c349251320c1dbfffe0120569b05cbebc162026a9183d8c993da92 WatchSource:0}: Error finding container 0f75ba3112c349251320c1dbfffe0120569b05cbebc162026a9183d8c993da92: Status 404 returned error can't find the container with id 0f75ba3112c349251320c1dbfffe0120569b05cbebc162026a9183d8c993da92 Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.227386 4965 generic.go:334] "Generic (PLEG): container finished" podID="c21e28dc-1ac5-404c-bbb3-6298f94fc5d0" containerID="bac1040b115a7735de4fa6a91d378134b60b573bdc07567a2d3080b79f3dda6b" exitCode=0 Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.272934 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f62c8ad7-6089-47f6-ab15-b7ec5eee53d8" path="/var/lib/kubelet/pods/f62c8ad7-6089-47f6-ab15-b7ec5eee53d8/volumes" Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.276829 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" event={"ID":"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0","Type":"ContainerDied","Data":"bac1040b115a7735de4fa6a91d378134b60b573bdc07567a2d3080b79f3dda6b"} Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.276864 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-66r4m" event={"ID":"feacab4f-9866-41e7-a8c2-e9850aff1252","Type":"ContainerStarted","Data":"52a02ffcc46a894acb9872515a61488b2e823b86c7010b316f702acef9b3762d"} Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.276882 4965 scope.go:117] "RemoveContainer" containerID="bac1040b115a7735de4fa6a91d378134b60b573bdc07567a2d3080b79f3dda6b" Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 
10:03:57.332422 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cbc96c99c-x6mpf" Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.374547 4965 scope.go:117] "RemoveContainer" containerID="bb0c0c197521e180fedcfe62f88576b0bb513ceda467eca09b0353e6c5f43353" Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.376156 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-ovsdbserver-sb\") pod \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\" (UID: \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.385136 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-config\") pod \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\" (UID: \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.385275 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-ovsdbserver-nb\") pod \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\" (UID: \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.385349 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-dns-swift-storage-0\") pod \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\" (UID: \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.385392 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-dns-svc\") pod 
\"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\" (UID: \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.385516 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzw59\" (UniqueName: \"kubernetes.io/projected/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-kube-api-access-zzw59\") pod \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\" (UID: \"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0\") " Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.493458 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-kube-api-access-zzw59" (OuterVolumeSpecName: "kube-api-access-zzw59") pod "c21e28dc-1ac5-404c-bbb3-6298f94fc5d0" (UID: "c21e28dc-1ac5-404c-bbb3-6298f94fc5d0"). InnerVolumeSpecName "kube-api-access-zzw59". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.507234 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-public-tls-certs\") pod \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.507438 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-combined-ca-bundle\") pod \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.507529 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-ovndb-tls-certs\") pod \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " Feb 19 
10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.508075 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-internal-tls-certs\") pod \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.508242 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-httpd-config\") pod \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.508341 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-config\") pod \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.508497 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brvgc\" (UniqueName: \"kubernetes.io/projected/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-kube-api-access-brvgc\") pod \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\" (UID: \"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54\") " Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.509523 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzw59\" (UniqueName: \"kubernetes.io/projected/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-kube-api-access-zzw59\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.549524 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c21e28dc-1ac5-404c-bbb3-6298f94fc5d0" (UID: 
"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.557432 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-kube-api-access-brvgc" (OuterVolumeSpecName: "kube-api-access-brvgc") pod "9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54" (UID: "9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54"). InnerVolumeSpecName "kube-api-access-brvgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.611726 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.611764 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brvgc\" (UniqueName: \"kubernetes.io/projected/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-kube-api-access-brvgc\") on node \"crc\" DevicePath \"\"" Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.627878 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54" (UID: "9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.713583 4965 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.714116 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c21e28dc-1ac5-404c-bbb3-6298f94fc5d0" (UID: "c21e28dc-1ac5-404c-bbb3-6298f94fc5d0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.773789 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c21e28dc-1ac5-404c-bbb3-6298f94fc5d0" (UID: "c21e28dc-1ac5-404c-bbb3-6298f94fc5d0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.797481 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c21e28dc-1ac5-404c-bbb3-6298f94fc5d0" (UID: "c21e28dc-1ac5-404c-bbb3-6298f94fc5d0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.818624 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.818657 4965 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.818668 4965 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.875472 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54" (UID: "9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.884488 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.885700 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-config" (OuterVolumeSpecName: "config") pod "9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54" (UID: "9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.919976 4965 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:57 crc kubenswrapper[4965]: I0219 10:03:57.920012 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-config\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.051799 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54" (UID: "9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.118519 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54" (UID: "9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.126771 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-config" (OuterVolumeSpecName: "config") pod "c21e28dc-1ac5-404c-bbb3-6298f94fc5d0" (UID: "c21e28dc-1ac5-404c-bbb3-6298f94fc5d0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.130830 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.130861 4965 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.130873 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0-config\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.273979 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.300442 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6","Type":"ContainerStarted","Data":"9d03bad4d7199fcc8612f7714f81da1492578ceb0154c7da6ec24258a6d1f5ad"}
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.300501 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6","Type":"ContainerStarted","Data":"0f75ba3112c349251320c1dbfffe0120569b05cbebc162026a9183d8c993da92"}
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.334215 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cbc96c99c-x6mpf" event={"ID":"9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54","Type":"ContainerDied","Data":"b66fa0c769f044d4209b4ad5b159d613000a9930a66e8345c73f73a42dd6320f"}
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.334285 4965 scope.go:117] "RemoveContainer" containerID="0dd5ff578f88784de0c00bb232e2b39066e6bda734b72385e0e2339a6febfd1a"
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.334632 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cbc96c99c-x6mpf"
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.337545 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g6g6j" event={"ID":"c21e28dc-1ac5-404c-bbb3-6298f94fc5d0","Type":"ContainerDied","Data":"3b1eb84afa0d3d4129b82241431bc265f5175804b25010b5fa46eb4cb6dcda35"}
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.338021 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g6g6j"
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.357331 4965 generic.go:334] "Generic (PLEG): container finished" podID="feacab4f-9866-41e7-a8c2-e9850aff1252" containerID="521d1a7f6f99a2ef4ae43a3c0ff1e7c74557907e5c7f53846cc9bf813c91d4a9" exitCode=0
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.357421 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-66r4m" event={"ID":"feacab4f-9866-41e7-a8c2-e9850aff1252","Type":"ContainerDied","Data":"521d1a7f6f99a2ef4ae43a3c0ff1e7c74557907e5c7f53846cc9bf813c91d4a9"}
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.364799 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c97f468c6-bwf6p" event={"ID":"45f4a2b8-338f-4c3d-afe8-305eb599081c","Type":"ContainerStarted","Data":"9015f34952cefbb9bd42e0029bbdb46d4a39bb8825ba02c404e4db59f1814c42"}
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.364841 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-c97f468c6-bwf6p"
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.364862 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-c97f468c6-bwf6p"
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.383864 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.441791 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-c97f468c6-bwf6p" podStartSLOduration=4.441767642 podStartE2EDuration="4.441767642s" podCreationTimestamp="2026-02-19 10:03:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:58.418048506 +0000 UTC m=+1294.039369826" watchObservedRunningTime="2026-02-19 10:03:58.441767642 +0000 UTC m=+1294.063088952"
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.521758 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54" (UID: "9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.529584 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g6g6j"]
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.545770 4965 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.545967 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g6g6j"]
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.600289 4965 scope.go:117] "RemoveContainer" containerID="137ccfcadc3afe6b21bbd9c90f0f855e1edb71876c6ad8e76ee2d91c8bb38ab5"
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.703622 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5cbc96c99c-x6mpf"]
Feb 19 10:03:58 crc kubenswrapper[4965]: I0219 10:03:58.725698 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5cbc96c99c-x6mpf"]
Feb 19 10:03:59 crc kubenswrapper[4965]: I0219 10:03:59.242305 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54" path="/var/lib/kubelet/pods/9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54/volumes"
Feb 19 10:03:59 crc kubenswrapper[4965]: I0219 10:03:59.243385 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c21e28dc-1ac5-404c-bbb3-6298f94fc5d0" path="/var/lib/kubelet/pods/c21e28dc-1ac5-404c-bbb3-6298f94fc5d0/volumes"
Feb 19 10:03:59 crc kubenswrapper[4965]: I0219 10:03:59.381653 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6" containerName="cloudkitty-api-log" containerID="cri-o://9d03bad4d7199fcc8612f7714f81da1492578ceb0154c7da6ec24258a6d1f5ad" gracePeriod=30
Feb 19 10:03:59 crc kubenswrapper[4965]: I0219 10:03:59.381765 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6" containerName="cloudkitty-api" containerID="cri-o://66922edcca4ee1c1bfd91792b5232872f63362b68312661a536b67c30ac81a9b" gracePeriod=30
Feb 19 10:03:59 crc kubenswrapper[4965]: I0219 10:03:59.381608 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6","Type":"ContainerStarted","Data":"66922edcca4ee1c1bfd91792b5232872f63362b68312661a536b67c30ac81a9b"}
Feb 19 10:03:59 crc kubenswrapper[4965]: I0219 10:03:59.381927 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0"
Feb 19 10:03:59 crc kubenswrapper[4965]: I0219 10:03:59.387163 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-66r4m" event={"ID":"feacab4f-9866-41e7-a8c2-e9850aff1252","Type":"ContainerStarted","Data":"7a4838ae6a8b0456b86cae2a6be896db8b6d9e6ee13475213a4ad0f585b35bec"}
Feb 19 10:03:59 crc kubenswrapper[4965]: I0219 10:03:59.387553 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58bd69657f-66r4m"
Feb 19 10:03:59 crc kubenswrapper[4965]: I0219 10:03:59.414901 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"92030a04-19d0-4766-b560-3d5b64be8716","Type":"ContainerStarted","Data":"f4b72a9569d4af30bee5002dd256b39ef98ec0d05fb1d597cc5b9dcc19f9c291"}
Feb 19 10:03:59 crc kubenswrapper[4965]: I0219 10:03:59.414965 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"92030a04-19d0-4766-b560-3d5b64be8716","Type":"ContainerStarted","Data":"33a8c39f14bc8ba120be73bbbc5ab653c6f35d06424991d32b047323815e34b4"}
Feb 19 10:03:59 crc kubenswrapper[4965]: I0219 10:03:59.417867 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=4.417845313 podStartE2EDuration="4.417845313s" podCreationTimestamp="2026-02-19 10:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:59.413621941 +0000 UTC m=+1295.034943261" watchObservedRunningTime="2026-02-19 10:03:59.417845313 +0000 UTC m=+1295.039166623"
Feb 19 10:03:59 crc kubenswrapper[4965]: I0219 10:03:59.439068 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58bd69657f-66r4m" podStartSLOduration=4.439047588 podStartE2EDuration="4.439047588s" podCreationTimestamp="2026-02-19 10:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:03:59.431970606 +0000 UTC m=+1295.053291926" watchObservedRunningTime="2026-02-19 10:03:59.439047588 +0000 UTC m=+1295.060368898"
Feb 19 10:04:00 crc kubenswrapper[4965]: I0219 10:04:00.155611 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d7568bd5b-nvbtn"
Feb 19 10:04:00 crc kubenswrapper[4965]: I0219 10:04:00.288870 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d7568bd5b-nvbtn"
Feb 19 10:04:00 crc kubenswrapper[4965]: I0219 10:04:00.438998 4965 generic.go:334] "Generic (PLEG): container finished" podID="dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6" containerID="66922edcca4ee1c1bfd91792b5232872f63362b68312661a536b67c30ac81a9b" exitCode=0
Feb 19 10:04:00 crc kubenswrapper[4965]: I0219 10:04:00.439297 4965 generic.go:334] "Generic (PLEG): container finished" podID="dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6" containerID="9d03bad4d7199fcc8612f7714f81da1492578ceb0154c7da6ec24258a6d1f5ad" exitCode=143
Feb 19 10:04:00 crc kubenswrapper[4965]: I0219 10:04:00.440119 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6","Type":"ContainerDied","Data":"66922edcca4ee1c1bfd91792b5232872f63362b68312661a536b67c30ac81a9b"}
Feb 19 10:04:00 crc kubenswrapper[4965]: I0219 10:04:00.440153 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6","Type":"ContainerDied","Data":"9d03bad4d7199fcc8612f7714f81da1492578ceb0154c7da6ec24258a6d1f5ad"}
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.172202 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.241853 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-certs\") pod \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") "
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.241967 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-config-data\") pod \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") "
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.242092 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-logs\") pod \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") "
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.242116 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-combined-ca-bundle\") pod \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") "
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.242152 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-config-data-custom\") pod \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") "
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.242173 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-scripts\") pod \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") "
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.242224 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfkb8\" (UniqueName: \"kubernetes.io/projected/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-kube-api-access-hfkb8\") pod \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\" (UID: \"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6\") "
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.256684 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6" (UID: "dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.257585 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-logs" (OuterVolumeSpecName: "logs") pod "dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6" (UID: "dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.259838 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-kube-api-access-hfkb8" (OuterVolumeSpecName: "kube-api-access-hfkb8") pod "dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6" (UID: "dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6"). InnerVolumeSpecName "kube-api-access-hfkb8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.260035 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-certs" (OuterVolumeSpecName: "certs") pod "dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6" (UID: "dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.261829 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-scripts" (OuterVolumeSpecName: "scripts") pod "dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6" (UID: "dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.293517 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6" (UID: "dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.314340 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-config-data" (OuterVolumeSpecName: "config-data") pod "dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6" (UID: "dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.345842 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.345889 4965 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-logs\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.345902 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.345917 4965 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.345928 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.345941 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfkb8\" (UniqueName: \"kubernetes.io/projected/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-kube-api-access-hfkb8\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.345953 4965 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6-certs\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.466172 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"908accc3-aea8-40d3-a13f-c197badfa0d1","Type":"ContainerStarted","Data":"5d7b515deab7bdabd8b3d9ba472035b79bd994c93e3605cdd424f5f408a414d5"}
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.494964 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6","Type":"ContainerDied","Data":"0f75ba3112c349251320c1dbfffe0120569b05cbebc162026a9183d8c993da92"}
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.495025 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.495057 4965 scope.go:117] "RemoveContainer" containerID="66922edcca4ee1c1bfd91792b5232872f63362b68312661a536b67c30ac81a9b"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.517024 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.474727691 podStartE2EDuration="6.517005985s" podCreationTimestamp="2026-02-19 10:03:55 +0000 UTC" firstStartedPulling="2026-02-19 10:03:56.80446524 +0000 UTC m=+1292.425786560" lastFinishedPulling="2026-02-19 10:04:00.846743544 +0000 UTC m=+1296.468064854" observedRunningTime="2026-02-19 10:04:01.47849027 +0000 UTC m=+1297.099811580" watchObservedRunningTime="2026-02-19 10:04:01.517005985 +0000 UTC m=+1297.138327295"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.540067 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"]
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.564014 4965 scope.go:117] "RemoveContainer" containerID="9d03bad4d7199fcc8612f7714f81da1492578ceb0154c7da6ec24258a6d1f5ad"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.572844 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.604159 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.641225 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 19 10:04:01 crc kubenswrapper[4965]: E0219 10:04:01.642170 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21e28dc-1ac5-404c-bbb3-6298f94fc5d0" containerName="dnsmasq-dns"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.642192 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21e28dc-1ac5-404c-bbb3-6298f94fc5d0" containerName="dnsmasq-dns"
Feb 19 10:04:01 crc kubenswrapper[4965]: E0219 10:04:01.642941 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6" containerName="cloudkitty-api-log"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.642950 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6" containerName="cloudkitty-api-log"
Feb 19 10:04:01 crc kubenswrapper[4965]: E0219 10:04:01.642969 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54" containerName="neutron-httpd"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.642977 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54" containerName="neutron-httpd"
Feb 19 10:04:01 crc kubenswrapper[4965]: E0219 10:04:01.643036 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6" containerName="cloudkitty-api"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.643045 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6" containerName="cloudkitty-api"
Feb 19 10:04:01 crc kubenswrapper[4965]: E0219 10:04:01.643058 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54" containerName="neutron-api"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.643077 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54" containerName="neutron-api"
Feb 19 10:04:01 crc kubenswrapper[4965]: E0219 10:04:01.643108 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21e28dc-1ac5-404c-bbb3-6298f94fc5d0" containerName="init"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.643114 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21e28dc-1ac5-404c-bbb3-6298f94fc5d0" containerName="init"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.647961 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54" containerName="neutron-httpd"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.647989 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6" containerName="cloudkitty-api"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.648002 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6" containerName="cloudkitty-api-log"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.648032 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="c21e28dc-1ac5-404c-bbb3-6298f94fc5d0" containerName="dnsmasq-dns"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.648042 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb59c9e-00e2-4f7e-85df-d8b40e2cdd54" containerName="neutron-api"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.649173 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.653736 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8ddfcb2-bbac-405a-beee-d6e4da23170d-logs\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.653787 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.653835 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-scripts\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.653857 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.653883 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-config-data\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.653915 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.653929 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f8ddfcb2-bbac-405a-beee-d6e4da23170d-certs\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.653952 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st4kl\" (UniqueName: \"kubernetes.io/projected/f8ddfcb2-bbac-405a-beee-d6e4da23170d-kube-api-access-st4kl\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.653981 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.655759 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.656399 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.656513 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.666755 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.756974 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.757039 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8ddfcb2-bbac-405a-beee-d6e4da23170d-logs\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.757080 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.757127 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-scripts\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.757146 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.757172 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-config-data\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.757240 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.757262 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f8ddfcb2-bbac-405a-beee-d6e4da23170d-certs\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.757285 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st4kl\" (UniqueName: \"kubernetes.io/projected/f8ddfcb2-bbac-405a-beee-d6e4da23170d-kube-api-access-st4kl\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.761880 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.762125 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8ddfcb2-bbac-405a-beee-d6e4da23170d-logs\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.766133 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.768030 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.769621 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-scripts\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.770595 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.772499 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f8ddfcb2-bbac-405a-beee-d6e4da23170d-certs\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0"
Feb 19 10:04:01 crc kubenswrapper[4965]: I0219
10:04:01.775734 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st4kl\" (UniqueName: \"kubernetes.io/projected/f8ddfcb2-bbac-405a-beee-d6e4da23170d-kube-api-access-st4kl\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0" Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.777283 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-config-data\") pod \"cloudkitty-api-0\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " pod="openstack/cloudkitty-api-0" Feb 19 10:04:01 crc kubenswrapper[4965]: I0219 10:04:01.969899 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 19 10:04:02 crc kubenswrapper[4965]: I0219 10:04:02.452640 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 19 10:04:02 crc kubenswrapper[4965]: W0219 10:04:02.454551 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8ddfcb2_bbac_405a_beee_d6e4da23170d.slice/crio-19def9397b2c4afd8b2781c5dee81dbce3b165e6f509953a4036284fe63072e0 WatchSource:0}: Error finding container 19def9397b2c4afd8b2781c5dee81dbce3b165e6f509953a4036284fe63072e0: Status 404 returned error can't find the container with id 19def9397b2c4afd8b2781c5dee81dbce3b165e6f509953a4036284fe63072e0 Feb 19 10:04:02 crc kubenswrapper[4965]: I0219 10:04:02.518381 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"92030a04-19d0-4766-b560-3d5b64be8716","Type":"ContainerStarted","Data":"eafc906e2a16c50ee69b4aeb833dfab400515ac10b972fa17e78999b3db0eb30"} Feb 19 10:04:02 crc kubenswrapper[4965]: I0219 10:04:02.518505 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" 
Feb 19 10:04:02 crc kubenswrapper[4965]: I0219 10:04:02.521108 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"f8ddfcb2-bbac-405a-beee-d6e4da23170d","Type":"ContainerStarted","Data":"19def9397b2c4afd8b2781c5dee81dbce3b165e6f509953a4036284fe63072e0"} Feb 19 10:04:02 crc kubenswrapper[4965]: I0219 10:04:02.547395 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.547357195 podStartE2EDuration="6.547357195s" podCreationTimestamp="2026-02-19 10:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:04:02.538583402 +0000 UTC m=+1298.159904712" watchObservedRunningTime="2026-02-19 10:04:02.547357195 +0000 UTC m=+1298.168678505" Feb 19 10:04:03 crc kubenswrapper[4965]: I0219 10:04:03.113409 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 10:04:03 crc kubenswrapper[4965]: I0219 10:04:03.164073 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:04:03 crc kubenswrapper[4965]: I0219 10:04:03.284419 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6" path="/var/lib/kubelet/pods/dbcb0cd7-89d6-4c32-8f15-f1ebc4eeb5b6/volumes" Feb 19 10:04:03 crc kubenswrapper[4965]: I0219 10:04:03.531757 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"f8ddfcb2-bbac-405a-beee-d6e4da23170d","Type":"ContainerStarted","Data":"69ca164c9f37f0ce0c37fa016da9e88aa4cd0b504bbdb26325c7128fca9264df"} Feb 19 10:04:03 crc kubenswrapper[4965]: I0219 10:04:03.531841 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" 
event={"ID":"f8ddfcb2-bbac-405a-beee-d6e4da23170d","Type":"ContainerStarted","Data":"332c594571f180116989e323bf9d780e8c755ff84d07fe3b12f0ac47d671441f"} Feb 19 10:04:03 crc kubenswrapper[4965]: I0219 10:04:03.531825 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="908accc3-aea8-40d3-a13f-c197badfa0d1" containerName="cloudkitty-proc" containerID="cri-o://5d7b515deab7bdabd8b3d9ba472035b79bd994c93e3605cdd424f5f408a414d5" gracePeriod=30 Feb 19 10:04:03 crc kubenswrapper[4965]: I0219 10:04:03.532467 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5bf5dff6-70bd-4013-95d1-6e30d7e765a0" containerName="probe" containerID="cri-o://3c55fc999dbe5b5d627568c9e85c57dcd835bf47c420669edb7a2ad4c1b94538" gracePeriod=30 Feb 19 10:04:03 crc kubenswrapper[4965]: I0219 10:04:03.532424 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5bf5dff6-70bd-4013-95d1-6e30d7e765a0" containerName="cinder-scheduler" containerID="cri-o://c04bc1c0bf157dfa28099dda62833b90199b2419967a9219f832600f93cdbbca" gracePeriod=30 Feb 19 10:04:03 crc kubenswrapper[4965]: I0219 10:04:03.576103 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=2.576076965 podStartE2EDuration="2.576076965s" podCreationTimestamp="2026-02-19 10:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:04:03.565693523 +0000 UTC m=+1299.187014833" watchObservedRunningTime="2026-02-19 10:04:03.576076965 +0000 UTC m=+1299.197398275" Feb 19 10:04:04 crc kubenswrapper[4965]: I0219 10:04:04.540677 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Feb 19 10:04:05 crc kubenswrapper[4965]: I0219 10:04:05.552103 4965 
generic.go:334] "Generic (PLEG): container finished" podID="5bf5dff6-70bd-4013-95d1-6e30d7e765a0" containerID="3c55fc999dbe5b5d627568c9e85c57dcd835bf47c420669edb7a2ad4c1b94538" exitCode=0 Feb 19 10:04:05 crc kubenswrapper[4965]: I0219 10:04:05.552180 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5bf5dff6-70bd-4013-95d1-6e30d7e765a0","Type":"ContainerDied","Data":"3c55fc999dbe5b5d627568c9e85c57dcd835bf47c420669edb7a2ad4c1b94538"} Feb 19 10:04:05 crc kubenswrapper[4965]: I0219 10:04:05.583354 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58bd69657f-66r4m" Feb 19 10:04:05 crc kubenswrapper[4965]: I0219 10:04:05.664621 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-hprpt"] Feb 19 10:04:05 crc kubenswrapper[4965]: I0219 10:04:05.664858 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-hprpt" podUID="0acf6ac2-e0ae-4315-9e82-656caeeedbb6" containerName="dnsmasq-dns" containerID="cri-o://af67119bac7aa3bad15b61e62d797becae8b4f67ddcd5a580a1250f541c52fd0" gracePeriod=10 Feb 19 10:04:05 crc kubenswrapper[4965]: I0219 10:04:05.924682 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.342358 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-hprpt" Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.476058 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-ovsdbserver-nb\") pod \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.476183 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8qj7\" (UniqueName: \"kubernetes.io/projected/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-kube-api-access-s8qj7\") pod \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.476281 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-dns-swift-storage-0\") pod \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.476309 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-ovsdbserver-sb\") pod \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.476427 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-dns-svc\") pod \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.476515 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-config\") pod \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\" (UID: \"0acf6ac2-e0ae-4315-9e82-656caeeedbb6\") " Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.504726 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-kube-api-access-s8qj7" (OuterVolumeSpecName: "kube-api-access-s8qj7") pod "0acf6ac2-e0ae-4315-9e82-656caeeedbb6" (UID: "0acf6ac2-e0ae-4315-9e82-656caeeedbb6"). InnerVolumeSpecName "kube-api-access-s8qj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.569524 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0acf6ac2-e0ae-4315-9e82-656caeeedbb6" (UID: "0acf6ac2-e0ae-4315-9e82-656caeeedbb6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.575172 4965 generic.go:334] "Generic (PLEG): container finished" podID="0acf6ac2-e0ae-4315-9e82-656caeeedbb6" containerID="af67119bac7aa3bad15b61e62d797becae8b4f67ddcd5a580a1250f541c52fd0" exitCode=0 Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.575525 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-hprpt" event={"ID":"0acf6ac2-e0ae-4315-9e82-656caeeedbb6","Type":"ContainerDied","Data":"af67119bac7aa3bad15b61e62d797becae8b4f67ddcd5a580a1250f541c52fd0"} Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.575593 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-hprpt" event={"ID":"0acf6ac2-e0ae-4315-9e82-656caeeedbb6","Type":"ContainerDied","Data":"ff95e90c0fbbfd5543f9e93a9adcc1b359959fbc0657dc387f2a2fb094e681ff"} Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.575621 4965 scope.go:117] "RemoveContainer" containerID="af67119bac7aa3bad15b61e62d797becae8b4f67ddcd5a580a1250f541c52fd0" Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.575800 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-hprpt" Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.581780 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8qj7\" (UniqueName: \"kubernetes.io/projected/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-kube-api-access-s8qj7\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.583958 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.598221 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0acf6ac2-e0ae-4315-9e82-656caeeedbb6" (UID: "0acf6ac2-e0ae-4315-9e82-656caeeedbb6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.602674 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0acf6ac2-e0ae-4315-9e82-656caeeedbb6" (UID: "0acf6ac2-e0ae-4315-9e82-656caeeedbb6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.603771 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-config" (OuterVolumeSpecName: "config") pod "0acf6ac2-e0ae-4315-9e82-656caeeedbb6" (UID: "0acf6ac2-e0ae-4315-9e82-656caeeedbb6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.632630 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-c97f468c6-bwf6p" Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.632760 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0acf6ac2-e0ae-4315-9e82-656caeeedbb6" (UID: "0acf6ac2-e0ae-4315-9e82-656caeeedbb6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.636759 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-fbb65bccb-zmlg7" Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.658497 4965 scope.go:117] "RemoveContainer" containerID="e7d1a98e558e3d8f1a25ef24f0c59ef5df97d74fff3ed683d7f605119b1cad68" Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.689404 4965 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.689446 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.689461 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.689473 4965 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/0acf6ac2-e0ae-4315-9e82-656caeeedbb6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.697254 4965 scope.go:117] "RemoveContainer" containerID="af67119bac7aa3bad15b61e62d797becae8b4f67ddcd5a580a1250f541c52fd0" Feb 19 10:04:06 crc kubenswrapper[4965]: E0219 10:04:06.698543 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af67119bac7aa3bad15b61e62d797becae8b4f67ddcd5a580a1250f541c52fd0\": container with ID starting with af67119bac7aa3bad15b61e62d797becae8b4f67ddcd5a580a1250f541c52fd0 not found: ID does not exist" containerID="af67119bac7aa3bad15b61e62d797becae8b4f67ddcd5a580a1250f541c52fd0" Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.698601 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af67119bac7aa3bad15b61e62d797becae8b4f67ddcd5a580a1250f541c52fd0"} err="failed to get container status \"af67119bac7aa3bad15b61e62d797becae8b4f67ddcd5a580a1250f541c52fd0\": rpc error: code = NotFound desc = could not find container \"af67119bac7aa3bad15b61e62d797becae8b4f67ddcd5a580a1250f541c52fd0\": container with ID starting with af67119bac7aa3bad15b61e62d797becae8b4f67ddcd5a580a1250f541c52fd0 not found: ID does not exist" Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.698636 4965 scope.go:117] "RemoveContainer" containerID="e7d1a98e558e3d8f1a25ef24f0c59ef5df97d74fff3ed683d7f605119b1cad68" Feb 19 10:04:06 crc kubenswrapper[4965]: E0219 10:04:06.702555 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7d1a98e558e3d8f1a25ef24f0c59ef5df97d74fff3ed683d7f605119b1cad68\": container with ID starting with e7d1a98e558e3d8f1a25ef24f0c59ef5df97d74fff3ed683d7f605119b1cad68 not found: ID does not exist" containerID="e7d1a98e558e3d8f1a25ef24f0c59ef5df97d74fff3ed683d7f605119b1cad68" 
Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.702613 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7d1a98e558e3d8f1a25ef24f0c59ef5df97d74fff3ed683d7f605119b1cad68"} err="failed to get container status \"e7d1a98e558e3d8f1a25ef24f0c59ef5df97d74fff3ed683d7f605119b1cad68\": rpc error: code = NotFound desc = could not find container \"e7d1a98e558e3d8f1a25ef24f0c59ef5df97d74fff3ed683d7f605119b1cad68\": container with ID starting with e7d1a98e558e3d8f1a25ef24f0c59ef5df97d74fff3ed683d7f605119b1cad68 not found: ID does not exist" Feb 19 10:04:06 crc kubenswrapper[4965]: I0219 10:04:06.979441 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-hprpt"] Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.012141 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-hprpt"] Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.148048 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-c97f468c6-bwf6p" Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.246592 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0acf6ac2-e0ae-4315-9e82-656caeeedbb6" path="/var/lib/kubelet/pods/0acf6ac2-e0ae-4315-9e82-656caeeedbb6/volumes" Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.360494 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d7568bd5b-nvbtn"] Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.360953 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-d7568bd5b-nvbtn" podUID="e59b9522-d30f-4640-8e62-55e0b0c91c9a" containerName="barbican-api-log" containerID="cri-o://bb586ffe32bb9aad537fdcfbf563a3527d9d8f1844ac3dc36c18cb230a15370d" gracePeriod=30 Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.361367 4965 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/barbican-api-d7568bd5b-nvbtn" podUID="e59b9522-d30f-4640-8e62-55e0b0c91c9a" containerName="barbican-api" containerID="cri-o://c02d9e12468023d21ce3165981bc638d5ab08a661112da13016f680b97a1989b" gracePeriod=30 Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.646536 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.653564 4965 generic.go:334] "Generic (PLEG): container finished" podID="5bf5dff6-70bd-4013-95d1-6e30d7e765a0" containerID="c04bc1c0bf157dfa28099dda62833b90199b2419967a9219f832600f93cdbbca" exitCode=0 Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.653662 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5bf5dff6-70bd-4013-95d1-6e30d7e765a0","Type":"ContainerDied","Data":"c04bc1c0bf157dfa28099dda62833b90199b2419967a9219f832600f93cdbbca"} Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.653689 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5bf5dff6-70bd-4013-95d1-6e30d7e765a0","Type":"ContainerDied","Data":"2fadd622db164bc8e1764ec13c62dcf4396134d0e6387514ae6b5dd8fe3693c2"} Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.653705 4965 scope.go:117] "RemoveContainer" containerID="3c55fc999dbe5b5d627568c9e85c57dcd835bf47c420669edb7a2ad4c1b94538" Feb 19 10:04:07 crc kubenswrapper[4965]: E0219 10:04:07.682670 4965 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod908accc3_aea8_40d3_a13f_c197badfa0d1.slice/crio-conmon-5d7b515deab7bdabd8b3d9ba472035b79bd994c93e3605cdd424f5f408a414d5.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode59b9522_d30f_4640_8e62_55e0b0c91c9a.slice/crio-conmon-bb586ffe32bb9aad537fdcfbf563a3527d9d8f1844ac3dc36c18cb230a15370d.scope\": RecentStats: unable to find data in memory cache]" Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.682931 4965 generic.go:334] "Generic (PLEG): container finished" podID="e59b9522-d30f-4640-8e62-55e0b0c91c9a" containerID="bb586ffe32bb9aad537fdcfbf563a3527d9d8f1844ac3dc36c18cb230a15370d" exitCode=143 Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.683010 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d7568bd5b-nvbtn" event={"ID":"e59b9522-d30f-4640-8e62-55e0b0c91c9a","Type":"ContainerDied","Data":"bb586ffe32bb9aad537fdcfbf563a3527d9d8f1844ac3dc36c18cb230a15370d"} Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.690037 4965 scope.go:117] "RemoveContainer" containerID="c04bc1c0bf157dfa28099dda62833b90199b2419967a9219f832600f93cdbbca" Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.691100 4965 generic.go:334] "Generic (PLEG): container finished" podID="908accc3-aea8-40d3-a13f-c197badfa0d1" containerID="5d7b515deab7bdabd8b3d9ba472035b79bd994c93e3605cdd424f5f408a414d5" exitCode=0 Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.691989 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"908accc3-aea8-40d3-a13f-c197badfa0d1","Type":"ContainerDied","Data":"5d7b515deab7bdabd8b3d9ba472035b79bd994c93e3605cdd424f5f408a414d5"} Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.748620 4965 scope.go:117] "RemoveContainer" containerID="3c55fc999dbe5b5d627568c9e85c57dcd835bf47c420669edb7a2ad4c1b94538" Feb 19 10:04:07 crc kubenswrapper[4965]: E0219 10:04:07.749225 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c55fc999dbe5b5d627568c9e85c57dcd835bf47c420669edb7a2ad4c1b94538\": 
container with ID starting with 3c55fc999dbe5b5d627568c9e85c57dcd835bf47c420669edb7a2ad4c1b94538 not found: ID does not exist" containerID="3c55fc999dbe5b5d627568c9e85c57dcd835bf47c420669edb7a2ad4c1b94538" Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.749258 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c55fc999dbe5b5d627568c9e85c57dcd835bf47c420669edb7a2ad4c1b94538"} err="failed to get container status \"3c55fc999dbe5b5d627568c9e85c57dcd835bf47c420669edb7a2ad4c1b94538\": rpc error: code = NotFound desc = could not find container \"3c55fc999dbe5b5d627568c9e85c57dcd835bf47c420669edb7a2ad4c1b94538\": container with ID starting with 3c55fc999dbe5b5d627568c9e85c57dcd835bf47c420669edb7a2ad4c1b94538 not found: ID does not exist" Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.749279 4965 scope.go:117] "RemoveContainer" containerID="c04bc1c0bf157dfa28099dda62833b90199b2419967a9219f832600f93cdbbca" Feb 19 10:04:07 crc kubenswrapper[4965]: E0219 10:04:07.749589 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c04bc1c0bf157dfa28099dda62833b90199b2419967a9219f832600f93cdbbca\": container with ID starting with c04bc1c0bf157dfa28099dda62833b90199b2419967a9219f832600f93cdbbca not found: ID does not exist" containerID="c04bc1c0bf157dfa28099dda62833b90199b2419967a9219f832600f93cdbbca" Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.749608 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c04bc1c0bf157dfa28099dda62833b90199b2419967a9219f832600f93cdbbca"} err="failed to get container status \"c04bc1c0bf157dfa28099dda62833b90199b2419967a9219f832600f93cdbbca\": rpc error: code = NotFound desc = could not find container \"c04bc1c0bf157dfa28099dda62833b90199b2419967a9219f832600f93cdbbca\": container with ID starting with 
c04bc1c0bf157dfa28099dda62833b90199b2419967a9219f832600f93cdbbca not found: ID does not exist" Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.774702 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6dbb44f597-5cgmc" Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.831730 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-config-data-custom\") pod \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\" (UID: \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.832045 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-etc-machine-id\") pod \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\" (UID: \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.832160 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-scripts\") pod \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\" (UID: \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.832188 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5bf5dff6-70bd-4013-95d1-6e30d7e765a0" (UID: "5bf5dff6-70bd-4013-95d1-6e30d7e765a0"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.832261 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-config-data\") pod \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\" (UID: \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.832355 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-combined-ca-bundle\") pod \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\" (UID: \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.832396 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hl6r\" (UniqueName: \"kubernetes.io/projected/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-kube-api-access-5hl6r\") pod \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\" (UID: \"5bf5dff6-70bd-4013-95d1-6e30d7e765a0\") " Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.832860 4965 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.848432 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-scripts" (OuterVolumeSpecName: "scripts") pod "5bf5dff6-70bd-4013-95d1-6e30d7e765a0" (UID: "5bf5dff6-70bd-4013-95d1-6e30d7e765a0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.848473 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5bf5dff6-70bd-4013-95d1-6e30d7e765a0" (UID: "5bf5dff6-70bd-4013-95d1-6e30d7e765a0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.864407 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-kube-api-access-5hl6r" (OuterVolumeSpecName: "kube-api-access-5hl6r") pod "5bf5dff6-70bd-4013-95d1-6e30d7e765a0" (UID: "5bf5dff6-70bd-4013-95d1-6e30d7e765a0"). InnerVolumeSpecName "kube-api-access-5hl6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.935367 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hl6r\" (UniqueName: \"kubernetes.io/projected/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-kube-api-access-5hl6r\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.935400 4965 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.935408 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.944598 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-combined-ca-bundle" 
(OuterVolumeSpecName: "combined-ca-bundle") pod "5bf5dff6-70bd-4013-95d1-6e30d7e765a0" (UID: "5bf5dff6-70bd-4013-95d1-6e30d7e765a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.977285 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-config-data" (OuterVolumeSpecName: "config-data") pod "5bf5dff6-70bd-4013-95d1-6e30d7e765a0" (UID: "5bf5dff6-70bd-4013-95d1-6e30d7e765a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:07 crc kubenswrapper[4965]: I0219 10:04:07.986006 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.040539 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.040567 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf5dff6-70bd-4013-95d1-6e30d7e765a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.141850 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-scripts\") pod \"908accc3-aea8-40d3-a13f-c197badfa0d1\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.141910 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-combined-ca-bundle\") pod 
\"908accc3-aea8-40d3-a13f-c197badfa0d1\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.142067 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/908accc3-aea8-40d3-a13f-c197badfa0d1-certs\") pod \"908accc3-aea8-40d3-a13f-c197badfa0d1\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.142148 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-config-data-custom\") pod \"908accc3-aea8-40d3-a13f-c197badfa0d1\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.142291 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-config-data\") pod \"908accc3-aea8-40d3-a13f-c197badfa0d1\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.142344 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llkw4\" (UniqueName: \"kubernetes.io/projected/908accc3-aea8-40d3-a13f-c197badfa0d1-kube-api-access-llkw4\") pod \"908accc3-aea8-40d3-a13f-c197badfa0d1\" (UID: \"908accc3-aea8-40d3-a13f-c197badfa0d1\") " Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.145392 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908accc3-aea8-40d3-a13f-c197badfa0d1-certs" (OuterVolumeSpecName: "certs") pod "908accc3-aea8-40d3-a13f-c197badfa0d1" (UID: "908accc3-aea8-40d3-a13f-c197badfa0d1"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.146781 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-scripts" (OuterVolumeSpecName: "scripts") pod "908accc3-aea8-40d3-a13f-c197badfa0d1" (UID: "908accc3-aea8-40d3-a13f-c197badfa0d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.153612 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "908accc3-aea8-40d3-a13f-c197badfa0d1" (UID: "908accc3-aea8-40d3-a13f-c197badfa0d1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.153729 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908accc3-aea8-40d3-a13f-c197badfa0d1-kube-api-access-llkw4" (OuterVolumeSpecName: "kube-api-access-llkw4") pod "908accc3-aea8-40d3-a13f-c197badfa0d1" (UID: "908accc3-aea8-40d3-a13f-c197badfa0d1"). InnerVolumeSpecName "kube-api-access-llkw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.170406 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-config-data" (OuterVolumeSpecName: "config-data") pod "908accc3-aea8-40d3-a13f-c197badfa0d1" (UID: "908accc3-aea8-40d3-a13f-c197badfa0d1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.181460 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "908accc3-aea8-40d3-a13f-c197badfa0d1" (UID: "908accc3-aea8-40d3-a13f-c197badfa0d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.244830 4965 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/908accc3-aea8-40d3-a13f-c197badfa0d1-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.244868 4965 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.244881 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.244891 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llkw4\" (UniqueName: \"kubernetes.io/projected/908accc3-aea8-40d3-a13f-c197badfa0d1-kube-api-access-llkw4\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.244901 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.244911 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/908accc3-aea8-40d3-a13f-c197badfa0d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.701924 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.703613 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"908accc3-aea8-40d3-a13f-c197badfa0d1","Type":"ContainerDied","Data":"5f86309983052a49013be8e1e26eac3eb9e0955dce9f977ca2cdbafd7c03a830"} Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.703660 4965 scope.go:117] "RemoveContainer" containerID="5d7b515deab7bdabd8b3d9ba472035b79bd994c93e3605cdd424f5f408a414d5" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.703783 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.756974 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.765330 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.797658 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 19 10:04:08 crc kubenswrapper[4965]: E0219 10:04:08.798437 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bf5dff6-70bd-4013-95d1-6e30d7e765a0" containerName="cinder-scheduler" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.798457 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bf5dff6-70bd-4013-95d1-6e30d7e765a0" containerName="cinder-scheduler" Feb 19 10:04:08 crc kubenswrapper[4965]: E0219 10:04:08.798482 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bf5dff6-70bd-4013-95d1-6e30d7e765a0" 
containerName="probe" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.798488 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bf5dff6-70bd-4013-95d1-6e30d7e765a0" containerName="probe" Feb 19 10:04:08 crc kubenswrapper[4965]: E0219 10:04:08.798518 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acf6ac2-e0ae-4315-9e82-656caeeedbb6" containerName="dnsmasq-dns" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.798523 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acf6ac2-e0ae-4315-9e82-656caeeedbb6" containerName="dnsmasq-dns" Feb 19 10:04:08 crc kubenswrapper[4965]: E0219 10:04:08.798544 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="908accc3-aea8-40d3-a13f-c197badfa0d1" containerName="cloudkitty-proc" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.798549 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="908accc3-aea8-40d3-a13f-c197badfa0d1" containerName="cloudkitty-proc" Feb 19 10:04:08 crc kubenswrapper[4965]: E0219 10:04:08.798565 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acf6ac2-e0ae-4315-9e82-656caeeedbb6" containerName="init" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.798571 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acf6ac2-e0ae-4315-9e82-656caeeedbb6" containerName="init" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.798922 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="0acf6ac2-e0ae-4315-9e82-656caeeedbb6" containerName="dnsmasq-dns" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.798949 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bf5dff6-70bd-4013-95d1-6e30d7e765a0" containerName="cinder-scheduler" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.798972 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bf5dff6-70bd-4013-95d1-6e30d7e765a0" containerName="probe" Feb 19 10:04:08 crc 
kubenswrapper[4965]: I0219 10:04:08.798994 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="908accc3-aea8-40d3-a13f-c197badfa0d1" containerName="cloudkitty-proc" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.799944 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.811173 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.852002 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.869496 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.888934 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.900210 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.902876 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.913803 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.933003 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.972749 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/81c49478-306d-44e9-99bd-157057f0ed27-certs\") pod \"cloudkitty-proc-0\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.973188 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq67l\" (UniqueName: \"kubernetes.io/projected/81c49478-306d-44e9-99bd-157057f0ed27-kube-api-access-cq67l\") pod \"cloudkitty-proc-0\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.973346 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-config-data\") pod \"cloudkitty-proc-0\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.973422 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-scripts\") pod \"cloudkitty-proc-0\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.973529 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:04:08 crc kubenswrapper[4965]: I0219 10:04:08.973562 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.075798 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/81c49478-306d-44e9-99bd-157057f0ed27-certs\") pod \"cloudkitty-proc-0\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.076983 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-config-data\") pod \"cloudkitty-proc-0\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.077061 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq67l\" (UniqueName: \"kubernetes.io/projected/81c49478-306d-44e9-99bd-157057f0ed27-kube-api-access-cq67l\") pod \"cloudkitty-proc-0\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.077176 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-scripts\") pod \"cloudkitty-proc-0\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.077363 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e838493-1547-4574-8af2-eff17e75c65b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2e838493-1547-4574-8af2-eff17e75c65b\") " pod="openstack/cinder-scheduler-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.077488 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.077557 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e838493-1547-4574-8af2-eff17e75c65b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2e838493-1547-4574-8af2-eff17e75c65b\") " pod="openstack/cinder-scheduler-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.077628 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e838493-1547-4574-8af2-eff17e75c65b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2e838493-1547-4574-8af2-eff17e75c65b\") " pod="openstack/cinder-scheduler-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.077699 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-combined-ca-bundle\") pod 
\"cloudkitty-proc-0\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.077790 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcq9d\" (UniqueName: \"kubernetes.io/projected/2e838493-1547-4574-8af2-eff17e75c65b-kube-api-access-fcq9d\") pod \"cinder-scheduler-0\" (UID: \"2e838493-1547-4574-8af2-eff17e75c65b\") " pod="openstack/cinder-scheduler-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.077892 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e838493-1547-4574-8af2-eff17e75c65b-config-data\") pod \"cinder-scheduler-0\" (UID: \"2e838493-1547-4574-8af2-eff17e75c65b\") " pod="openstack/cinder-scheduler-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.078033 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e838493-1547-4574-8af2-eff17e75c65b-scripts\") pod \"cinder-scheduler-0\" (UID: \"2e838493-1547-4574-8af2-eff17e75c65b\") " pod="openstack/cinder-scheduler-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.080759 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/81c49478-306d-44e9-99bd-157057f0ed27-certs\") pod \"cloudkitty-proc-0\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.082933 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-config-data\") pod \"cloudkitty-proc-0\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 
10:04:09.083120 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.083242 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-scripts\") pod \"cloudkitty-proc-0\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.098728 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.099135 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq67l\" (UniqueName: \"kubernetes.io/projected/81c49478-306d-44e9-99bd-157057f0ed27-kube-api-access-cq67l\") pod \"cloudkitty-proc-0\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.151085 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.180070 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e838493-1547-4574-8af2-eff17e75c65b-scripts\") pod \"cinder-scheduler-0\" (UID: \"2e838493-1547-4574-8af2-eff17e75c65b\") " pod="openstack/cinder-scheduler-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.180430 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e838493-1547-4574-8af2-eff17e75c65b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2e838493-1547-4574-8af2-eff17e75c65b\") " pod="openstack/cinder-scheduler-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.180510 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e838493-1547-4574-8af2-eff17e75c65b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2e838493-1547-4574-8af2-eff17e75c65b\") " pod="openstack/cinder-scheduler-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.180571 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e838493-1547-4574-8af2-eff17e75c65b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2e838493-1547-4574-8af2-eff17e75c65b\") " pod="openstack/cinder-scheduler-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.180713 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2e838493-1547-4574-8af2-eff17e75c65b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2e838493-1547-4574-8af2-eff17e75c65b\") " pod="openstack/cinder-scheduler-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.180765 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fcq9d\" (UniqueName: \"kubernetes.io/projected/2e838493-1547-4574-8af2-eff17e75c65b-kube-api-access-fcq9d\") pod \"cinder-scheduler-0\" (UID: \"2e838493-1547-4574-8af2-eff17e75c65b\") " pod="openstack/cinder-scheduler-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.180935 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e838493-1547-4574-8af2-eff17e75c65b-config-data\") pod \"cinder-scheduler-0\" (UID: \"2e838493-1547-4574-8af2-eff17e75c65b\") " pod="openstack/cinder-scheduler-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.185836 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e838493-1547-4574-8af2-eff17e75c65b-config-data\") pod \"cinder-scheduler-0\" (UID: \"2e838493-1547-4574-8af2-eff17e75c65b\") " pod="openstack/cinder-scheduler-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.186385 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e838493-1547-4574-8af2-eff17e75c65b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2e838493-1547-4574-8af2-eff17e75c65b\") " pod="openstack/cinder-scheduler-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.189848 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e838493-1547-4574-8af2-eff17e75c65b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2e838493-1547-4574-8af2-eff17e75c65b\") " pod="openstack/cinder-scheduler-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.192629 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e838493-1547-4574-8af2-eff17e75c65b-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"2e838493-1547-4574-8af2-eff17e75c65b\") " pod="openstack/cinder-scheduler-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.216427 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcq9d\" (UniqueName: \"kubernetes.io/projected/2e838493-1547-4574-8af2-eff17e75c65b-kube-api-access-fcq9d\") pod \"cinder-scheduler-0\" (UID: \"2e838493-1547-4574-8af2-eff17e75c65b\") " pod="openstack/cinder-scheduler-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.216519 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bf5dff6-70bd-4013-95d1-6e30d7e765a0" path="/var/lib/kubelet/pods/5bf5dff6-70bd-4013-95d1-6e30d7e765a0/volumes" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.217990 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="908accc3-aea8-40d3-a13f-c197badfa0d1" path="/var/lib/kubelet/pods/908accc3-aea8-40d3-a13f-c197badfa0d1/volumes" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.240733 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.796113 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 19 10:04:09 crc kubenswrapper[4965]: W0219 10:04:09.802924 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81c49478_306d_44e9_99bd_157057f0ed27.slice/crio-93daa53f64fb1a55c99e091e9add6cc4b99f3244d5ee77a32d638135b35e4519 WatchSource:0}: Error finding container 93daa53f64fb1a55c99e091e9add6cc4b99f3244d5ee77a32d638135b35e4519: Status 404 returned error can't find the container with id 93daa53f64fb1a55c99e091e9add6cc4b99f3244d5ee77a32d638135b35e4519 Feb 19 10:04:09 crc kubenswrapper[4965]: I0219 10:04:09.930588 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:04:10 crc kubenswrapper[4965]: I0219 10:04:10.025756 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 19 10:04:10 crc kubenswrapper[4965]: I0219 10:04:10.752429 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2e838493-1547-4574-8af2-eff17e75c65b","Type":"ContainerStarted","Data":"22cfa5de580f72100ec6e73197d779b5dbb94528a7f5a878ce45eb64cf0981e7"} Feb 19 10:04:10 crc kubenswrapper[4965]: I0219 10:04:10.752724 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2e838493-1547-4574-8af2-eff17e75c65b","Type":"ContainerStarted","Data":"2410283b768c71f058141466e80542a7b85d57eb3654404b07b98d52ea486f8d"} Feb 19 10:04:10 crc kubenswrapper[4965]: I0219 10:04:10.756528 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"81c49478-306d-44e9-99bd-157057f0ed27","Type":"ContainerStarted","Data":"479abf975e75efce1c54c3a226386f041c2ca5083e07761a46d776e30efe55e9"} Feb 19 
10:04:10 crc kubenswrapper[4965]: I0219 10:04:10.756565 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"81c49478-306d-44e9-99bd-157057f0ed27","Type":"ContainerStarted","Data":"93daa53f64fb1a55c99e091e9add6cc4b99f3244d5ee77a32d638135b35e4519"}
Feb 19 10:04:10 crc kubenswrapper[4965]: I0219 10:04:10.778839 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.778822442 podStartE2EDuration="2.778822442s" podCreationTimestamp="2026-02-19 10:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:04:10.773565215 +0000 UTC m=+1306.394886535" watchObservedRunningTime="2026-02-19 10:04:10.778822442 +0000 UTC m=+1306.400143752"
Feb 19 10:04:10 crc kubenswrapper[4965]: I0219 10:04:10.931994 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d7568bd5b-nvbtn" podUID="e59b9522-d30f-4640-8e62-55e0b0c91c9a" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": read tcp 10.217.0.2:53014->10.217.0.179:9311: read: connection reset by peer"
Feb 19 10:04:10 crc kubenswrapper[4965]: I0219 10:04:10.932415 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d7568bd5b-nvbtn" podUID="e59b9522-d30f-4640-8e62-55e0b0c91c9a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": read tcp 10.217.0.2:53008->10.217.0.179:9311: read: connection reset by peer"
Feb 19 10:04:10 crc kubenswrapper[4965]: I0219 10:04:10.950385 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b7b667979-hprpt" podUID="0acf6ac2-e0ae-4315-9e82-656caeeedbb6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.170:5353: i/o timeout"
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.499385 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d7568bd5b-nvbtn"
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.651495 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e59b9522-d30f-4640-8e62-55e0b0c91c9a-config-data\") pod \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\" (UID: \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\") "
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.651660 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e59b9522-d30f-4640-8e62-55e0b0c91c9a-logs\") pod \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\" (UID: \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\") "
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.651687 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e59b9522-d30f-4640-8e62-55e0b0c91c9a-combined-ca-bundle\") pod \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\" (UID: \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\") "
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.651717 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e59b9522-d30f-4640-8e62-55e0b0c91c9a-config-data-custom\") pod \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\" (UID: \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\") "
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.651766 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnrhd\" (UniqueName: \"kubernetes.io/projected/e59b9522-d30f-4640-8e62-55e0b0c91c9a-kube-api-access-xnrhd\") pod \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\" (UID: \"e59b9522-d30f-4640-8e62-55e0b0c91c9a\") "
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.652921 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e59b9522-d30f-4640-8e62-55e0b0c91c9a-logs" (OuterVolumeSpecName: "logs") pod "e59b9522-d30f-4640-8e62-55e0b0c91c9a" (UID: "e59b9522-d30f-4640-8e62-55e0b0c91c9a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.657082 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e59b9522-d30f-4640-8e62-55e0b0c91c9a-kube-api-access-xnrhd" (OuterVolumeSpecName: "kube-api-access-xnrhd") pod "e59b9522-d30f-4640-8e62-55e0b0c91c9a" (UID: "e59b9522-d30f-4640-8e62-55e0b0c91c9a"). InnerVolumeSpecName "kube-api-access-xnrhd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.657303 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e59b9522-d30f-4640-8e62-55e0b0c91c9a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e59b9522-d30f-4640-8e62-55e0b0c91c9a" (UID: "e59b9522-d30f-4640-8e62-55e0b0c91c9a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.690988 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e59b9522-d30f-4640-8e62-55e0b0c91c9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e59b9522-d30f-4640-8e62-55e0b0c91c9a" (UID: "e59b9522-d30f-4640-8e62-55e0b0c91c9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.732338 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e59b9522-d30f-4640-8e62-55e0b0c91c9a-config-data" (OuterVolumeSpecName: "config-data") pod "e59b9522-d30f-4640-8e62-55e0b0c91c9a" (UID: "e59b9522-d30f-4640-8e62-55e0b0c91c9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.753573 4965 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e59b9522-d30f-4640-8e62-55e0b0c91c9a-logs\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.753605 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e59b9522-d30f-4640-8e62-55e0b0c91c9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.753616 4965 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e59b9522-d30f-4640-8e62-55e0b0c91c9a-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.753625 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnrhd\" (UniqueName: \"kubernetes.io/projected/e59b9522-d30f-4640-8e62-55e0b0c91c9a-kube-api-access-xnrhd\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.753633 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e59b9522-d30f-4640-8e62-55e0b0c91c9a-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.767679 4965 generic.go:334] "Generic (PLEG): container finished" podID="e59b9522-d30f-4640-8e62-55e0b0c91c9a" containerID="c02d9e12468023d21ce3165981bc638d5ab08a661112da13016f680b97a1989b" exitCode=0
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.767736 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d7568bd5b-nvbtn"
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.767781 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d7568bd5b-nvbtn" event={"ID":"e59b9522-d30f-4640-8e62-55e0b0c91c9a","Type":"ContainerDied","Data":"c02d9e12468023d21ce3165981bc638d5ab08a661112da13016f680b97a1989b"}
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.767816 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d7568bd5b-nvbtn" event={"ID":"e59b9522-d30f-4640-8e62-55e0b0c91c9a","Type":"ContainerDied","Data":"4ec3e80bcc7e823b3341d1c61be5db3cf22a497f7f37ecaad7ddbd5dc52f3e18"}
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.767836 4965 scope.go:117] "RemoveContainer" containerID="c02d9e12468023d21ce3165981bc638d5ab08a661112da13016f680b97a1989b"
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.779264 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2e838493-1547-4574-8af2-eff17e75c65b","Type":"ContainerStarted","Data":"4d1bc436683d74aa24984c3a9a0518e25ba29ea85f68ce9a9bdef017f39a7f28"}
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.818319 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.818296944 podStartE2EDuration="3.818296944s" podCreationTimestamp="2026-02-19 10:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:04:11.800996743 +0000 UTC m=+1307.422318053" watchObservedRunningTime="2026-02-19 10:04:11.818296944 +0000 UTC m=+1307.439618254"
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.936249 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d7568bd5b-nvbtn"]
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.940696 4965 scope.go:117] "RemoveContainer" containerID="bb586ffe32bb9aad537fdcfbf563a3527d9d8f1844ac3dc36c18cb230a15370d"
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.950122 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-d7568bd5b-nvbtn"]
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.988148 4965 scope.go:117] "RemoveContainer" containerID="c02d9e12468023d21ce3165981bc638d5ab08a661112da13016f680b97a1989b"
Feb 19 10:04:11 crc kubenswrapper[4965]: E0219 10:04:11.991639 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c02d9e12468023d21ce3165981bc638d5ab08a661112da13016f680b97a1989b\": container with ID starting with c02d9e12468023d21ce3165981bc638d5ab08a661112da13016f680b97a1989b not found: ID does not exist" containerID="c02d9e12468023d21ce3165981bc638d5ab08a661112da13016f680b97a1989b"
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.991682 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c02d9e12468023d21ce3165981bc638d5ab08a661112da13016f680b97a1989b"} err="failed to get container status \"c02d9e12468023d21ce3165981bc638d5ab08a661112da13016f680b97a1989b\": rpc error: code = NotFound desc = could not find container \"c02d9e12468023d21ce3165981bc638d5ab08a661112da13016f680b97a1989b\": container with ID starting with c02d9e12468023d21ce3165981bc638d5ab08a661112da13016f680b97a1989b not found: ID does not exist"
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.991711 4965 scope.go:117] "RemoveContainer" containerID="bb586ffe32bb9aad537fdcfbf563a3527d9d8f1844ac3dc36c18cb230a15370d"
Feb 19 10:04:11 crc kubenswrapper[4965]: E0219 10:04:11.992348 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb586ffe32bb9aad537fdcfbf563a3527d9d8f1844ac3dc36c18cb230a15370d\": container with ID starting with bb586ffe32bb9aad537fdcfbf563a3527d9d8f1844ac3dc36c18cb230a15370d not found: ID does not exist" containerID="bb586ffe32bb9aad537fdcfbf563a3527d9d8f1844ac3dc36c18cb230a15370d"
Feb 19 10:04:11 crc kubenswrapper[4965]: I0219 10:04:11.992376 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb586ffe32bb9aad537fdcfbf563a3527d9d8f1844ac3dc36c18cb230a15370d"} err="failed to get container status \"bb586ffe32bb9aad537fdcfbf563a3527d9d8f1844ac3dc36c18cb230a15370d\": rpc error: code = NotFound desc = could not find container \"bb586ffe32bb9aad537fdcfbf563a3527d9d8f1844ac3dc36c18cb230a15370d\": container with ID starting with bb586ffe32bb9aad537fdcfbf563a3527d9d8f1844ac3dc36c18cb230a15370d not found: ID does not exist"
Feb 19 10:04:12 crc kubenswrapper[4965]: I0219 10:04:12.771116 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 19 10:04:12 crc kubenswrapper[4965]: E0219 10:04:12.771744 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59b9522-d30f-4640-8e62-55e0b0c91c9a" containerName="barbican-api-log"
Feb 19 10:04:12 crc kubenswrapper[4965]: I0219 10:04:12.771761 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59b9522-d30f-4640-8e62-55e0b0c91c9a" containerName="barbican-api-log"
Feb 19 10:04:12 crc kubenswrapper[4965]: E0219 10:04:12.771783 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59b9522-d30f-4640-8e62-55e0b0c91c9a" containerName="barbican-api"
Feb 19 10:04:12 crc kubenswrapper[4965]: I0219 10:04:12.771790 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59b9522-d30f-4640-8e62-55e0b0c91c9a" containerName="barbican-api"
Feb 19 10:04:12 crc kubenswrapper[4965]: I0219 10:04:12.771982 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59b9522-d30f-4640-8e62-55e0b0c91c9a" containerName="barbican-api"
Feb 19 10:04:12 crc kubenswrapper[4965]: I0219 10:04:12.772008 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59b9522-d30f-4640-8e62-55e0b0c91c9a" containerName="barbican-api-log"
Feb 19 10:04:12 crc kubenswrapper[4965]: I0219 10:04:12.772688 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 19 10:04:12 crc kubenswrapper[4965]: I0219 10:04:12.774964 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Feb 19 10:04:12 crc kubenswrapper[4965]: I0219 10:04:12.775049 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Feb 19 10:04:12 crc kubenswrapper[4965]: I0219 10:04:12.775937 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-zz659"
Feb 19 10:04:12 crc kubenswrapper[4965]: I0219 10:04:12.874559 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 19 10:04:12 crc kubenswrapper[4965]: I0219 10:04:12.882625 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq8jc\" (UniqueName: \"kubernetes.io/projected/956f39a0-e850-42ec-abc1-7419370d6592-kube-api-access-kq8jc\") pod \"openstackclient\" (UID: \"956f39a0-e850-42ec-abc1-7419370d6592\") " pod="openstack/openstackclient"
Feb 19 10:04:12 crc kubenswrapper[4965]: I0219 10:04:12.882716 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/956f39a0-e850-42ec-abc1-7419370d6592-combined-ca-bundle\") pod \"openstackclient\" (UID: \"956f39a0-e850-42ec-abc1-7419370d6592\") " pod="openstack/openstackclient"
Feb 19 10:04:12 crc kubenswrapper[4965]: I0219 10:04:12.882751 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/956f39a0-e850-42ec-abc1-7419370d6592-openstack-config\") pod \"openstackclient\" (UID: \"956f39a0-e850-42ec-abc1-7419370d6592\") " pod="openstack/openstackclient"
Feb 19 10:04:12 crc kubenswrapper[4965]: I0219 10:04:12.882789 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/956f39a0-e850-42ec-abc1-7419370d6592-openstack-config-secret\") pod \"openstackclient\" (UID: \"956f39a0-e850-42ec-abc1-7419370d6592\") " pod="openstack/openstackclient"
Feb 19 10:04:12 crc kubenswrapper[4965]: I0219 10:04:12.984412 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq8jc\" (UniqueName: \"kubernetes.io/projected/956f39a0-e850-42ec-abc1-7419370d6592-kube-api-access-kq8jc\") pod \"openstackclient\" (UID: \"956f39a0-e850-42ec-abc1-7419370d6592\") " pod="openstack/openstackclient"
Feb 19 10:04:12 crc kubenswrapper[4965]: I0219 10:04:12.984497 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/956f39a0-e850-42ec-abc1-7419370d6592-combined-ca-bundle\") pod \"openstackclient\" (UID: \"956f39a0-e850-42ec-abc1-7419370d6592\") " pod="openstack/openstackclient"
Feb 19 10:04:12 crc kubenswrapper[4965]: I0219 10:04:12.984533 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/956f39a0-e850-42ec-abc1-7419370d6592-openstack-config\") pod \"openstackclient\" (UID: \"956f39a0-e850-42ec-abc1-7419370d6592\") " pod="openstack/openstackclient"
Feb 19 10:04:12 crc kubenswrapper[4965]: I0219 10:04:12.984572 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/956f39a0-e850-42ec-abc1-7419370d6592-openstack-config-secret\") pod \"openstackclient\" (UID: \"956f39a0-e850-42ec-abc1-7419370d6592\") " pod="openstack/openstackclient"
Feb 19 10:04:12 crc kubenswrapper[4965]: I0219 10:04:12.986224 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/956f39a0-e850-42ec-abc1-7419370d6592-openstack-config\") pod \"openstackclient\" (UID: \"956f39a0-e850-42ec-abc1-7419370d6592\") " pod="openstack/openstackclient"
Feb 19 10:04:12 crc kubenswrapper[4965]: I0219 10:04:12.995220 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/956f39a0-e850-42ec-abc1-7419370d6592-openstack-config-secret\") pod \"openstackclient\" (UID: \"956f39a0-e850-42ec-abc1-7419370d6592\") " pod="openstack/openstackclient"
Feb 19 10:04:12 crc kubenswrapper[4965]: I0219 10:04:12.999755 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/956f39a0-e850-42ec-abc1-7419370d6592-combined-ca-bundle\") pod \"openstackclient\" (UID: \"956f39a0-e850-42ec-abc1-7419370d6592\") " pod="openstack/openstackclient"
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.013766 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq8jc\" (UniqueName: \"kubernetes.io/projected/956f39a0-e850-42ec-abc1-7419370d6592-kube-api-access-kq8jc\") pod \"openstackclient\" (UID: \"956f39a0-e850-42ec-abc1-7419370d6592\") " pod="openstack/openstackclient"
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.089339 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.155117 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.185497 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.252507 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e59b9522-d30f-4640-8e62-55e0b0c91c9a" path="/var/lib/kubelet/pods/e59b9522-d30f-4640-8e62-55e0b0c91c9a/volumes"
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.253069 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.279756 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.286632 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.395159 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/96d45563-22bf-42f1-bc03-4fd3b223293d-openstack-config\") pod \"openstackclient\" (UID: \"96d45563-22bf-42f1-bc03-4fd3b223293d\") " pod="openstack/openstackclient"
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.395296 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d45563-22bf-42f1-bc03-4fd3b223293d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"96d45563-22bf-42f1-bc03-4fd3b223293d\") " pod="openstack/openstackclient"
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.395349 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/96d45563-22bf-42f1-bc03-4fd3b223293d-openstack-config-secret\") pod \"openstackclient\" (UID: \"96d45563-22bf-42f1-bc03-4fd3b223293d\") " pod="openstack/openstackclient"
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.395457 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddk5d\" (UniqueName: \"kubernetes.io/projected/96d45563-22bf-42f1-bc03-4fd3b223293d-kube-api-access-ddk5d\") pod \"openstackclient\" (UID: \"96d45563-22bf-42f1-bc03-4fd3b223293d\") " pod="openstack/openstackclient"
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.496839 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddk5d\" (UniqueName: \"kubernetes.io/projected/96d45563-22bf-42f1-bc03-4fd3b223293d-kube-api-access-ddk5d\") pod \"openstackclient\" (UID: \"96d45563-22bf-42f1-bc03-4fd3b223293d\") " pod="openstack/openstackclient"
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.496968 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/96d45563-22bf-42f1-bc03-4fd3b223293d-openstack-config\") pod \"openstackclient\" (UID: \"96d45563-22bf-42f1-bc03-4fd3b223293d\") " pod="openstack/openstackclient"
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.497025 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d45563-22bf-42f1-bc03-4fd3b223293d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"96d45563-22bf-42f1-bc03-4fd3b223293d\") " pod="openstack/openstackclient"
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.497064 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/96d45563-22bf-42f1-bc03-4fd3b223293d-openstack-config-secret\") pod \"openstackclient\" (UID: \"96d45563-22bf-42f1-bc03-4fd3b223293d\") " pod="openstack/openstackclient"
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.498437 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/96d45563-22bf-42f1-bc03-4fd3b223293d-openstack-config\") pod \"openstackclient\" (UID: \"96d45563-22bf-42f1-bc03-4fd3b223293d\") " pod="openstack/openstackclient"
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.504371 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/96d45563-22bf-42f1-bc03-4fd3b223293d-openstack-config-secret\") pod \"openstackclient\" (UID: \"96d45563-22bf-42f1-bc03-4fd3b223293d\") " pod="openstack/openstackclient"
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.507738 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d45563-22bf-42f1-bc03-4fd3b223293d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"96d45563-22bf-42f1-bc03-4fd3b223293d\") " pod="openstack/openstackclient"
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.525542 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddk5d\" (UniqueName: \"kubernetes.io/projected/96d45563-22bf-42f1-bc03-4fd3b223293d-kube-api-access-ddk5d\") pod \"openstackclient\" (UID: \"96d45563-22bf-42f1-bc03-4fd3b223293d\") " pod="openstack/openstackclient"
Feb 19 10:04:13 crc kubenswrapper[4965]: E0219 10:04:13.538464 4965 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 19 10:04:13 crc kubenswrapper[4965]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_956f39a0-e850-42ec-abc1-7419370d6592_0(489515c0205d60ec2d72658fe21355f6cb6562090ab9db7a336bf9d2bfae7485): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"489515c0205d60ec2d72658fe21355f6cb6562090ab9db7a336bf9d2bfae7485" Netns:"/var/run/netns/3fe9957a-15d9-4c29-8a9b-e4ed45106424" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=489515c0205d60ec2d72658fe21355f6cb6562090ab9db7a336bf9d2bfae7485;K8S_POD_UID=956f39a0-e850-42ec-abc1-7419370d6592" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/956f39a0-e850-42ec-abc1-7419370d6592]: expected pod UID "956f39a0-e850-42ec-abc1-7419370d6592" but got "96d45563-22bf-42f1-bc03-4fd3b223293d" from Kube API
Feb 19 10:04:13 crc kubenswrapper[4965]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 19 10:04:13 crc kubenswrapper[4965]: >
Feb 19 10:04:13 crc kubenswrapper[4965]: E0219 10:04:13.538536 4965 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 19 10:04:13 crc kubenswrapper[4965]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_956f39a0-e850-42ec-abc1-7419370d6592_0(489515c0205d60ec2d72658fe21355f6cb6562090ab9db7a336bf9d2bfae7485): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"489515c0205d60ec2d72658fe21355f6cb6562090ab9db7a336bf9d2bfae7485" Netns:"/var/run/netns/3fe9957a-15d9-4c29-8a9b-e4ed45106424" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=489515c0205d60ec2d72658fe21355f6cb6562090ab9db7a336bf9d2bfae7485;K8S_POD_UID=956f39a0-e850-42ec-abc1-7419370d6592" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/956f39a0-e850-42ec-abc1-7419370d6592]: expected pod UID "956f39a0-e850-42ec-abc1-7419370d6592" but got "96d45563-22bf-42f1-bc03-4fd3b223293d" from Kube API
Feb 19 10:04:13 crc kubenswrapper[4965]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 19 10:04:13 crc kubenswrapper[4965]: > pod="openstack/openstackclient"
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.632007 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.795267 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.822188 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.825409 4965 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="956f39a0-e850-42ec-abc1-7419370d6592" podUID="96d45563-22bf-42f1-bc03-4fd3b223293d"
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.903170 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/956f39a0-e850-42ec-abc1-7419370d6592-openstack-config\") pod \"956f39a0-e850-42ec-abc1-7419370d6592\" (UID: \"956f39a0-e850-42ec-abc1-7419370d6592\") "
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.903522 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq8jc\" (UniqueName: \"kubernetes.io/projected/956f39a0-e850-42ec-abc1-7419370d6592-kube-api-access-kq8jc\") pod \"956f39a0-e850-42ec-abc1-7419370d6592\" (UID: \"956f39a0-e850-42ec-abc1-7419370d6592\") "
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.903619 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/956f39a0-e850-42ec-abc1-7419370d6592-combined-ca-bundle\") pod \"956f39a0-e850-42ec-abc1-7419370d6592\" (UID: \"956f39a0-e850-42ec-abc1-7419370d6592\") "
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.903661 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/956f39a0-e850-42ec-abc1-7419370d6592-openstack-config-secret\") pod \"956f39a0-e850-42ec-abc1-7419370d6592\" (UID: \"956f39a0-e850-42ec-abc1-7419370d6592\") "
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.905494 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/956f39a0-e850-42ec-abc1-7419370d6592-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "956f39a0-e850-42ec-abc1-7419370d6592" (UID: "956f39a0-e850-42ec-abc1-7419370d6592"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.911374 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/956f39a0-e850-42ec-abc1-7419370d6592-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "956f39a0-e850-42ec-abc1-7419370d6592" (UID: "956f39a0-e850-42ec-abc1-7419370d6592"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.914369 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/956f39a0-e850-42ec-abc1-7419370d6592-kube-api-access-kq8jc" (OuterVolumeSpecName: "kube-api-access-kq8jc") pod "956f39a0-e850-42ec-abc1-7419370d6592" (UID: "956f39a0-e850-42ec-abc1-7419370d6592"). InnerVolumeSpecName "kube-api-access-kq8jc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:04:13 crc kubenswrapper[4965]: I0219 10:04:13.915264 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/956f39a0-e850-42ec-abc1-7419370d6592-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "956f39a0-e850-42ec-abc1-7419370d6592" (UID: "956f39a0-e850-42ec-abc1-7419370d6592"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:04:14 crc kubenswrapper[4965]: I0219 10:04:14.006709 4965 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/956f39a0-e850-42ec-abc1-7419370d6592-openstack-config\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:14 crc kubenswrapper[4965]: I0219 10:04:14.006748 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq8jc\" (UniqueName: \"kubernetes.io/projected/956f39a0-e850-42ec-abc1-7419370d6592-kube-api-access-kq8jc\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:14 crc kubenswrapper[4965]: I0219 10:04:14.006760 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/956f39a0-e850-42ec-abc1-7419370d6592-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:14 crc kubenswrapper[4965]: I0219 10:04:14.006770 4965 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/956f39a0-e850-42ec-abc1-7419370d6592-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:14 crc kubenswrapper[4965]: I0219 10:04:14.241877 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 19 10:04:14 crc kubenswrapper[4965]: I0219 10:04:14.266121 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 19 10:04:14 crc kubenswrapper[4965]: I0219 10:04:14.806131 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"96d45563-22bf-42f1-bc03-4fd3b223293d","Type":"ContainerStarted","Data":"459e9982968aba4059dc82518d0a0f84d751fde9d3e204a98a6601cc237e1cac"}
Feb 19 10:04:14 crc kubenswrapper[4965]: I0219 10:04:14.806163 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 19 10:04:14 crc kubenswrapper[4965]: I0219 10:04:14.820460 4965 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="956f39a0-e850-42ec-abc1-7419370d6592" podUID="96d45563-22bf-42f1-bc03-4fd3b223293d"
Feb 19 10:04:14 crc kubenswrapper[4965]: I0219 10:04:14.935607 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-76485c5b9f-wzzpl"]
Feb 19 10:04:14 crc kubenswrapper[4965]: I0219 10:04:14.937168 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-76485c5b9f-wzzpl"
Feb 19 10:04:14 crc kubenswrapper[4965]: I0219 10:04:14.949788 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Feb 19 10:04:14 crc kubenswrapper[4965]: I0219 10:04:14.949980 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Feb 19 10:04:14 crc kubenswrapper[4965]: I0219 10:04:14.950091 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 19 10:04:14 crc kubenswrapper[4965]: I0219 10:04:14.950687 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-76485c5b9f-wzzpl"]
Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.025093 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69ee7a64-2965-42d1-bad2-82087733b567-log-httpd\") pod \"swift-proxy-76485c5b9f-wzzpl\" (UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl"
Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.025153 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69ee7a64-2965-42d1-bad2-82087733b567-run-httpd\") pod \"swift-proxy-76485c5b9f-wzzpl\" (UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl"
Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.025209 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snwd4\" (UniqueName: \"kubernetes.io/projected/69ee7a64-2965-42d1-bad2-82087733b567-kube-api-access-snwd4\") pod \"swift-proxy-76485c5b9f-wzzpl\" (UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl"
Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.025237 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ee7a64-2965-42d1-bad2-82087733b567-combined-ca-bundle\") pod \"swift-proxy-76485c5b9f-wzzpl\" (UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl"
Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.025280 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69ee7a64-2965-42d1-bad2-82087733b567-public-tls-certs\") pod \"swift-proxy-76485c5b9f-wzzpl\" (UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl"
Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.025301 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69ee7a64-2965-42d1-bad2-82087733b567-internal-tls-certs\") pod \"swift-proxy-76485c5b9f-wzzpl\" (UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl"
Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.025364 4965 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ee7a64-2965-42d1-bad2-82087733b567-config-data\") pod \"swift-proxy-76485c5b9f-wzzpl\" (UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl" Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.025393 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/69ee7a64-2965-42d1-bad2-82087733b567-etc-swift\") pod \"swift-proxy-76485c5b9f-wzzpl\" (UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl" Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.126885 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ee7a64-2965-42d1-bad2-82087733b567-combined-ca-bundle\") pod \"swift-proxy-76485c5b9f-wzzpl\" (UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl" Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.126957 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69ee7a64-2965-42d1-bad2-82087733b567-public-tls-certs\") pod \"swift-proxy-76485c5b9f-wzzpl\" (UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl" Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.126977 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69ee7a64-2965-42d1-bad2-82087733b567-internal-tls-certs\") pod \"swift-proxy-76485c5b9f-wzzpl\" (UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl" Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.127043 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ee7a64-2965-42d1-bad2-82087733b567-config-data\") pod \"swift-proxy-76485c5b9f-wzzpl\" (UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl" Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.127082 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/69ee7a64-2965-42d1-bad2-82087733b567-etc-swift\") pod \"swift-proxy-76485c5b9f-wzzpl\" (UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl" Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.127132 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69ee7a64-2965-42d1-bad2-82087733b567-log-httpd\") pod \"swift-proxy-76485c5b9f-wzzpl\" (UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl" Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.127164 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69ee7a64-2965-42d1-bad2-82087733b567-run-httpd\") pod \"swift-proxy-76485c5b9f-wzzpl\" (UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl" Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.127226 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snwd4\" (UniqueName: \"kubernetes.io/projected/69ee7a64-2965-42d1-bad2-82087733b567-kube-api-access-snwd4\") pod \"swift-proxy-76485c5b9f-wzzpl\" (UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl" Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.129269 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/69ee7a64-2965-42d1-bad2-82087733b567-log-httpd\") pod \"swift-proxy-76485c5b9f-wzzpl\" (UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl" Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.129572 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69ee7a64-2965-42d1-bad2-82087733b567-run-httpd\") pod \"swift-proxy-76485c5b9f-wzzpl\" (UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl" Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.133445 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ee7a64-2965-42d1-bad2-82087733b567-combined-ca-bundle\") pod \"swift-proxy-76485c5b9f-wzzpl\" (UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl" Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.136783 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69ee7a64-2965-42d1-bad2-82087733b567-internal-tls-certs\") pod \"swift-proxy-76485c5b9f-wzzpl\" (UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl" Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.138001 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/69ee7a64-2965-42d1-bad2-82087733b567-etc-swift\") pod \"swift-proxy-76485c5b9f-wzzpl\" (UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl" Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.147360 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ee7a64-2965-42d1-bad2-82087733b567-config-data\") pod \"swift-proxy-76485c5b9f-wzzpl\" 
(UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl" Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.149589 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snwd4\" (UniqueName: \"kubernetes.io/projected/69ee7a64-2965-42d1-bad2-82087733b567-kube-api-access-snwd4\") pod \"swift-proxy-76485c5b9f-wzzpl\" (UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl" Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.150898 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69ee7a64-2965-42d1-bad2-82087733b567-public-tls-certs\") pod \"swift-proxy-76485c5b9f-wzzpl\" (UID: \"69ee7a64-2965-42d1-bad2-82087733b567\") " pod="openstack/swift-proxy-76485c5b9f-wzzpl" Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.272850 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-76485c5b9f-wzzpl" Feb 19 10:04:15 crc kubenswrapper[4965]: I0219 10:04:15.288286 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="956f39a0-e850-42ec-abc1-7419370d6592" path="/var/lib/kubelet/pods/956f39a0-e850-42ec-abc1-7419370d6592/volumes" Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.028534 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-76485c5b9f-wzzpl"] Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.159507 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.159835 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="648dfd3e-578b-4808-84c2-6dd4b4a7954c" containerName="ceilometer-central-agent" containerID="cri-o://e142f3cf13924b1eb47f07fe5fdb88a893daad952a4d75c074a02fa0694e042d" gracePeriod=30 Feb 19 10:04:16 crc 
kubenswrapper[4965]: I0219 10:04:16.160148 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="648dfd3e-578b-4808-84c2-6dd4b4a7954c" containerName="proxy-httpd" containerID="cri-o://0400aa7c9dfd960cf23d08febf3d6ae9bd1f5875a6b095538c6e6b69821b1ee9" gracePeriod=30 Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.160239 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="648dfd3e-578b-4808-84c2-6dd4b4a7954c" containerName="sg-core" containerID="cri-o://b4c83774e861ec73bc9c0dfddcfedf12568c709a0c36018bd09c4e6870629001" gracePeriod=30 Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.160288 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="648dfd3e-578b-4808-84c2-6dd4b4a7954c" containerName="ceilometer-notification-agent" containerID="cri-o://28fb6b5acdca274bb67d1413f4c1739d36b826769dec9bc6ccbf95b4ebc87265" gracePeriod=30 Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.168523 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="648dfd3e-578b-4808-84c2-6dd4b4a7954c" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.184:3000/\": EOF" Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.601290 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.601542 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.720488 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-ffs2w"] Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.722202 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ffs2w" Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.757237 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ffs2w"] Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.773222 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg27f\" (UniqueName: \"kubernetes.io/projected/0ff16dee-aeea-4cb2-ba44-a8de28ff44ec-kube-api-access-bg27f\") pod \"nova-api-db-create-ffs2w\" (UID: \"0ff16dee-aeea-4cb2-ba44-a8de28ff44ec\") " pod="openstack/nova-api-db-create-ffs2w" Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.773340 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ff16dee-aeea-4cb2-ba44-a8de28ff44ec-operator-scripts\") pod \"nova-api-db-create-ffs2w\" (UID: \"0ff16dee-aeea-4cb2-ba44-a8de28ff44ec\") " pod="openstack/nova-api-db-create-ffs2w" Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.822150 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-fl9jp"] Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.823487 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-fl9jp" Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.845573 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76485c5b9f-wzzpl" event={"ID":"69ee7a64-2965-42d1-bad2-82087733b567","Type":"ContainerStarted","Data":"8b4b8d59b08430f3894a4d26f07da9d9e2c472d82fc0b9f79b6ad33c94de532a"} Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.845630 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76485c5b9f-wzzpl" event={"ID":"69ee7a64-2965-42d1-bad2-82087733b567","Type":"ContainerStarted","Data":"a731dcea0aa7add919a090223b325544afe30af0925db82f7a399b469b2588e4"} Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.845643 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76485c5b9f-wzzpl" event={"ID":"69ee7a64-2965-42d1-bad2-82087733b567","Type":"ContainerStarted","Data":"d864132c4d78ec8427d1bea832cd411457cbead87dc6ab437f43a90e8c4b000c"} Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.847302 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-fl9jp"] Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.847332 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-76485c5b9f-wzzpl" Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.847342 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-76485c5b9f-wzzpl" Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.853610 4965 generic.go:334] "Generic (PLEG): container finished" podID="648dfd3e-578b-4808-84c2-6dd4b4a7954c" containerID="0400aa7c9dfd960cf23d08febf3d6ae9bd1f5875a6b095538c6e6b69821b1ee9" exitCode=0 Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.853642 4965 generic.go:334] "Generic (PLEG): container finished" podID="648dfd3e-578b-4808-84c2-6dd4b4a7954c" 
containerID="b4c83774e861ec73bc9c0dfddcfedf12568c709a0c36018bd09c4e6870629001" exitCode=2 Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.853650 4965 generic.go:334] "Generic (PLEG): container finished" podID="648dfd3e-578b-4808-84c2-6dd4b4a7954c" containerID="e142f3cf13924b1eb47f07fe5fdb88a893daad952a4d75c074a02fa0694e042d" exitCode=0 Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.853667 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"648dfd3e-578b-4808-84c2-6dd4b4a7954c","Type":"ContainerDied","Data":"0400aa7c9dfd960cf23d08febf3d6ae9bd1f5875a6b095538c6e6b69821b1ee9"} Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.853743 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"648dfd3e-578b-4808-84c2-6dd4b4a7954c","Type":"ContainerDied","Data":"b4c83774e861ec73bc9c0dfddcfedf12568c709a0c36018bd09c4e6870629001"} Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.853757 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"648dfd3e-578b-4808-84c2-6dd4b4a7954c","Type":"ContainerDied","Data":"e142f3cf13924b1eb47f07fe5fdb88a893daad952a4d75c074a02fa0694e042d"} Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.879652 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ff16dee-aeea-4cb2-ba44-a8de28ff44ec-operator-scripts\") pod \"nova-api-db-create-ffs2w\" (UID: \"0ff16dee-aeea-4cb2-ba44-a8de28ff44ec\") " pod="openstack/nova-api-db-create-ffs2w" Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.879782 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvszx\" (UniqueName: \"kubernetes.io/projected/140e538f-9a0d-4c71-80a3-8710e0622021-kube-api-access-qvszx\") pod \"nova-cell0-db-create-fl9jp\" (UID: \"140e538f-9a0d-4c71-80a3-8710e0622021\") " 
pod="openstack/nova-cell0-db-create-fl9jp" Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.879832 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/140e538f-9a0d-4c71-80a3-8710e0622021-operator-scripts\") pod \"nova-cell0-db-create-fl9jp\" (UID: \"140e538f-9a0d-4c71-80a3-8710e0622021\") " pod="openstack/nova-cell0-db-create-fl9jp" Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.879854 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg27f\" (UniqueName: \"kubernetes.io/projected/0ff16dee-aeea-4cb2-ba44-a8de28ff44ec-kube-api-access-bg27f\") pod \"nova-api-db-create-ffs2w\" (UID: \"0ff16dee-aeea-4cb2-ba44-a8de28ff44ec\") " pod="openstack/nova-api-db-create-ffs2w" Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.881015 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ff16dee-aeea-4cb2-ba44-a8de28ff44ec-operator-scripts\") pod \"nova-api-db-create-ffs2w\" (UID: \"0ff16dee-aeea-4cb2-ba44-a8de28ff44ec\") " pod="openstack/nova-api-db-create-ffs2w" Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.911012 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg27f\" (UniqueName: \"kubernetes.io/projected/0ff16dee-aeea-4cb2-ba44-a8de28ff44ec-kube-api-access-bg27f\") pod \"nova-api-db-create-ffs2w\" (UID: \"0ff16dee-aeea-4cb2-ba44-a8de28ff44ec\") " pod="openstack/nova-api-db-create-ffs2w" Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.915174 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-76485c5b9f-wzzpl" podStartSLOduration=2.915146815 podStartE2EDuration="2.915146815s" podCreationTimestamp="2026-02-19 10:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:04:16.878451584 +0000 UTC m=+1312.499772894" watchObservedRunningTime="2026-02-19 10:04:16.915146815 +0000 UTC m=+1312.536468135" Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.943935 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-4d8f-account-create-update-xdplb"] Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.945329 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4d8f-account-create-update-xdplb" Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.950588 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.970101 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-2j7hd"] Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.971445 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2j7hd" Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.986542 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvszx\" (UniqueName: \"kubernetes.io/projected/140e538f-9a0d-4c71-80a3-8710e0622021-kube-api-access-qvszx\") pod \"nova-cell0-db-create-fl9jp\" (UID: \"140e538f-9a0d-4c71-80a3-8710e0622021\") " pod="openstack/nova-cell0-db-create-fl9jp" Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.986681 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/140e538f-9a0d-4c71-80a3-8710e0622021-operator-scripts\") pod \"nova-cell0-db-create-fl9jp\" (UID: \"140e538f-9a0d-4c71-80a3-8710e0622021\") " pod="openstack/nova-cell0-db-create-fl9jp" Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.987260 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-db-create-2j7hd"] Feb 19 10:04:16 crc kubenswrapper[4965]: I0219 10:04:16.988809 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/140e538f-9a0d-4c71-80a3-8710e0622021-operator-scripts\") pod \"nova-cell0-db-create-fl9jp\" (UID: \"140e538f-9a0d-4c71-80a3-8710e0622021\") " pod="openstack/nova-cell0-db-create-fl9jp" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.004849 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4d8f-account-create-update-xdplb"] Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.024491 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvszx\" (UniqueName: \"kubernetes.io/projected/140e538f-9a0d-4c71-80a3-8710e0622021-kube-api-access-qvszx\") pod \"nova-cell0-db-create-fl9jp\" (UID: \"140e538f-9a0d-4c71-80a3-8710e0622021\") " pod="openstack/nova-cell0-db-create-fl9jp" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.043062 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-ffs2w" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.093984 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjc4r\" (UniqueName: \"kubernetes.io/projected/fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f-kube-api-access-hjc4r\") pod \"nova-cell1-db-create-2j7hd\" (UID: \"fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f\") " pod="openstack/nova-cell1-db-create-2j7hd" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.094414 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08122272-8ff7-4dad-95ea-c9190baad3ba-operator-scripts\") pod \"nova-api-4d8f-account-create-update-xdplb\" (UID: \"08122272-8ff7-4dad-95ea-c9190baad3ba\") " pod="openstack/nova-api-4d8f-account-create-update-xdplb" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.095672 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f-operator-scripts\") pod \"nova-cell1-db-create-2j7hd\" (UID: \"fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f\") " pod="openstack/nova-cell1-db-create-2j7hd" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.095738 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmxrd\" (UniqueName: \"kubernetes.io/projected/08122272-8ff7-4dad-95ea-c9190baad3ba-kube-api-access-jmxrd\") pod \"nova-api-4d8f-account-create-update-xdplb\" (UID: \"08122272-8ff7-4dad-95ea-c9190baad3ba\") " pod="openstack/nova-api-4d8f-account-create-update-xdplb" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.137497 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-ec51-account-create-update-s5v66"] Feb 19 10:04:17 crc kubenswrapper[4965]: 
I0219 10:04:17.139417 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ec51-account-create-update-s5v66" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.144878 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.176892 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-fl9jp" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.204830 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f-operator-scripts\") pod \"nova-cell1-db-create-2j7hd\" (UID: \"fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f\") " pod="openstack/nova-cell1-db-create-2j7hd" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.204905 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmxrd\" (UniqueName: \"kubernetes.io/projected/08122272-8ff7-4dad-95ea-c9190baad3ba-kube-api-access-jmxrd\") pod \"nova-api-4d8f-account-create-update-xdplb\" (UID: \"08122272-8ff7-4dad-95ea-c9190baad3ba\") " pod="openstack/nova-api-4d8f-account-create-update-xdplb" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.204948 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b83107c-ba00-4687-9c58-94535f5a9a1e-operator-scripts\") pod \"nova-cell0-ec51-account-create-update-s5v66\" (UID: \"6b83107c-ba00-4687-9c58-94535f5a9a1e\") " pod="openstack/nova-cell0-ec51-account-create-update-s5v66" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.206706 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjc4r\" (UniqueName: 
\"kubernetes.io/projected/fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f-kube-api-access-hjc4r\") pod \"nova-cell1-db-create-2j7hd\" (UID: \"fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f\") " pod="openstack/nova-cell1-db-create-2j7hd" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.206745 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08122272-8ff7-4dad-95ea-c9190baad3ba-operator-scripts\") pod \"nova-api-4d8f-account-create-update-xdplb\" (UID: \"08122272-8ff7-4dad-95ea-c9190baad3ba\") " pod="openstack/nova-api-4d8f-account-create-update-xdplb" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.206834 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjdcp\" (UniqueName: \"kubernetes.io/projected/6b83107c-ba00-4687-9c58-94535f5a9a1e-kube-api-access-rjdcp\") pod \"nova-cell0-ec51-account-create-update-s5v66\" (UID: \"6b83107c-ba00-4687-9c58-94535f5a9a1e\") " pod="openstack/nova-cell0-ec51-account-create-update-s5v66" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.214415 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08122272-8ff7-4dad-95ea-c9190baad3ba-operator-scripts\") pod \"nova-api-4d8f-account-create-update-xdplb\" (UID: \"08122272-8ff7-4dad-95ea-c9190baad3ba\") " pod="openstack/nova-api-4d8f-account-create-update-xdplb" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.222905 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f-operator-scripts\") pod \"nova-cell1-db-create-2j7hd\" (UID: \"fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f\") " pod="openstack/nova-cell1-db-create-2j7hd" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.264284 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-ec51-account-create-update-s5v66"] Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.273210 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmxrd\" (UniqueName: \"kubernetes.io/projected/08122272-8ff7-4dad-95ea-c9190baad3ba-kube-api-access-jmxrd\") pod \"nova-api-4d8f-account-create-update-xdplb\" (UID: \"08122272-8ff7-4dad-95ea-c9190baad3ba\") " pod="openstack/nova-api-4d8f-account-create-update-xdplb" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.297343 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjc4r\" (UniqueName: \"kubernetes.io/projected/fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f-kube-api-access-hjc4r\") pod \"nova-cell1-db-create-2j7hd\" (UID: \"fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f\") " pod="openstack/nova-cell1-db-create-2j7hd" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.325748 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjdcp\" (UniqueName: \"kubernetes.io/projected/6b83107c-ba00-4687-9c58-94535f5a9a1e-kube-api-access-rjdcp\") pod \"nova-cell0-ec51-account-create-update-s5v66\" (UID: \"6b83107c-ba00-4687-9c58-94535f5a9a1e\") " pod="openstack/nova-cell0-ec51-account-create-update-s5v66" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.330837 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b83107c-ba00-4687-9c58-94535f5a9a1e-operator-scripts\") pod \"nova-cell0-ec51-account-create-update-s5v66\" (UID: \"6b83107c-ba00-4687-9c58-94535f5a9a1e\") " pod="openstack/nova-cell0-ec51-account-create-update-s5v66" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.331920 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b83107c-ba00-4687-9c58-94535f5a9a1e-operator-scripts\") pod 
\"nova-cell0-ec51-account-create-update-s5v66\" (UID: \"6b83107c-ba00-4687-9c58-94535f5a9a1e\") " pod="openstack/nova-cell0-ec51-account-create-update-s5v66" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.336294 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-b85b-account-create-update-z8f4q"] Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.337841 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b85b-account-create-update-z8f4q" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.341359 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.346616 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-b85b-account-create-update-z8f4q"] Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.349494 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjdcp\" (UniqueName: \"kubernetes.io/projected/6b83107c-ba00-4687-9c58-94535f5a9a1e-kube-api-access-rjdcp\") pod \"nova-cell0-ec51-account-create-update-s5v66\" (UID: \"6b83107c-ba00-4687-9c58-94535f5a9a1e\") " pod="openstack/nova-cell0-ec51-account-create-update-s5v66" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.434087 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a128eedd-c0da-49f3-8d7f-631d3a66a6d2-operator-scripts\") pod \"nova-cell1-b85b-account-create-update-z8f4q\" (UID: \"a128eedd-c0da-49f3-8d7f-631d3a66a6d2\") " pod="openstack/nova-cell1-b85b-account-create-update-z8f4q" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.434307 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8gk9\" (UniqueName: 
\"kubernetes.io/projected/a128eedd-c0da-49f3-8d7f-631d3a66a6d2-kube-api-access-p8gk9\") pod \"nova-cell1-b85b-account-create-update-z8f4q\" (UID: \"a128eedd-c0da-49f3-8d7f-631d3a66a6d2\") " pod="openstack/nova-cell1-b85b-account-create-update-z8f4q" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.535807 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ec51-account-create-update-s5v66" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.537912 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a128eedd-c0da-49f3-8d7f-631d3a66a6d2-operator-scripts\") pod \"nova-cell1-b85b-account-create-update-z8f4q\" (UID: \"a128eedd-c0da-49f3-8d7f-631d3a66a6d2\") " pod="openstack/nova-cell1-b85b-account-create-update-z8f4q" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.538083 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8gk9\" (UniqueName: \"kubernetes.io/projected/a128eedd-c0da-49f3-8d7f-631d3a66a6d2-kube-api-access-p8gk9\") pod \"nova-cell1-b85b-account-create-update-z8f4q\" (UID: \"a128eedd-c0da-49f3-8d7f-631d3a66a6d2\") " pod="openstack/nova-cell1-b85b-account-create-update-z8f4q" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.539313 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a128eedd-c0da-49f3-8d7f-631d3a66a6d2-operator-scripts\") pod \"nova-cell1-b85b-account-create-update-z8f4q\" (UID: \"a128eedd-c0da-49f3-8d7f-631d3a66a6d2\") " pod="openstack/nova-cell1-b85b-account-create-update-z8f4q" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.563926 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8gk9\" (UniqueName: \"kubernetes.io/projected/a128eedd-c0da-49f3-8d7f-631d3a66a6d2-kube-api-access-p8gk9\") pod 
\"nova-cell1-b85b-account-create-update-z8f4q\" (UID: \"a128eedd-c0da-49f3-8d7f-631d3a66a6d2\") " pod="openstack/nova-cell1-b85b-account-create-update-z8f4q" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.568656 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4d8f-account-create-update-xdplb" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.589902 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2j7hd" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.671385 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ffs2w"] Feb 19 10:04:17 crc kubenswrapper[4965]: W0219 10:04:17.693318 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ff16dee_aeea_4cb2_ba44_a8de28ff44ec.slice/crio-25d3d6b801a87e5e46407365d99e1b9df4a72e9e34c05630903f010ce81d336a WatchSource:0}: Error finding container 25d3d6b801a87e5e46407365d99e1b9df4a72e9e34c05630903f010ce81d336a: Status 404 returned error can't find the container with id 25d3d6b801a87e5e46407365d99e1b9df4a72e9e34c05630903f010ce81d336a Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.740425 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-b85b-account-create-update-z8f4q" Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.890901 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-fl9jp"] Feb 19 10:04:17 crc kubenswrapper[4965]: I0219 10:04:17.957139 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ffs2w" event={"ID":"0ff16dee-aeea-4cb2-ba44-a8de28ff44ec","Type":"ContainerStarted","Data":"25d3d6b801a87e5e46407365d99e1b9df4a72e9e34c05630903f010ce81d336a"} Feb 19 10:04:18 crc kubenswrapper[4965]: I0219 10:04:18.695866 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ec51-account-create-update-s5v66"] Feb 19 10:04:18 crc kubenswrapper[4965]: W0219 10:04:18.702268 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b83107c_ba00_4687_9c58_94535f5a9a1e.slice/crio-b71d17452259326f4dbdf5ebf333467733cee0ecd41292dd5160b0b73489a1e2 WatchSource:0}: Error finding container b71d17452259326f4dbdf5ebf333467733cee0ecd41292dd5160b0b73489a1e2: Status 404 returned error can't find the container with id b71d17452259326f4dbdf5ebf333467733cee0ecd41292dd5160b0b73489a1e2 Feb 19 10:04:18 crc kubenswrapper[4965]: I0219 10:04:18.731293 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4d8f-account-create-update-xdplb"] Feb 19 10:04:18 crc kubenswrapper[4965]: I0219 10:04:18.755414 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-b85b-account-create-update-z8f4q"] Feb 19 10:04:18 crc kubenswrapper[4965]: I0219 10:04:18.922841 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-2j7hd"] Feb 19 10:04:18 crc kubenswrapper[4965]: I0219 10:04:18.992409 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b85b-account-create-update-z8f4q" 
event={"ID":"a128eedd-c0da-49f3-8d7f-631d3a66a6d2","Type":"ContainerStarted","Data":"e08d830825c3ea5b1547f7e4d57c4eccfcbef77e748ba478dcdfb17e56344e03"} Feb 19 10:04:19 crc kubenswrapper[4965]: I0219 10:04:19.020348 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ec51-account-create-update-s5v66" event={"ID":"6b83107c-ba00-4687-9c58-94535f5a9a1e","Type":"ContainerStarted","Data":"b71d17452259326f4dbdf5ebf333467733cee0ecd41292dd5160b0b73489a1e2"} Feb 19 10:04:19 crc kubenswrapper[4965]: I0219 10:04:19.030705 4965 generic.go:334] "Generic (PLEG): container finished" podID="140e538f-9a0d-4c71-80a3-8710e0622021" containerID="220ad7a2f95fc4c08b0feaebaa4f9a1b765b451488b3bd9d79b06f35021fe479" exitCode=0 Feb 19 10:04:19 crc kubenswrapper[4965]: I0219 10:04:19.030762 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-fl9jp" event={"ID":"140e538f-9a0d-4c71-80a3-8710e0622021","Type":"ContainerDied","Data":"220ad7a2f95fc4c08b0feaebaa4f9a1b765b451488b3bd9d79b06f35021fe479"} Feb 19 10:04:19 crc kubenswrapper[4965]: I0219 10:04:19.030787 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-fl9jp" event={"ID":"140e538f-9a0d-4c71-80a3-8710e0622021","Type":"ContainerStarted","Data":"37cdc2791f79e69782cb61b79786c7b5eb7bb44bfac02bf796fcba99d585577b"} Feb 19 10:04:19 crc kubenswrapper[4965]: I0219 10:04:19.044619 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4d8f-account-create-update-xdplb" event={"ID":"08122272-8ff7-4dad-95ea-c9190baad3ba","Type":"ContainerStarted","Data":"d73361f8a0952d5a9eece39d7f0199a968519272c8fc4ebe388178b12205d992"} Feb 19 10:04:19 crc kubenswrapper[4965]: I0219 10:04:19.052821 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-2j7hd" 
event={"ID":"fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f","Type":"ContainerStarted","Data":"281c2038cf8421eab8dcb0f8b9008edafe2f25500ed747bc6d2140456ccd7d44"} Feb 19 10:04:19 crc kubenswrapper[4965]: I0219 10:04:19.057692 4965 generic.go:334] "Generic (PLEG): container finished" podID="0ff16dee-aeea-4cb2-ba44-a8de28ff44ec" containerID="1ed610a5cb8146470f514103c0c093ce71001aa6764d9e2039af708f123150f7" exitCode=0 Feb 19 10:04:19 crc kubenswrapper[4965]: I0219 10:04:19.058859 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ffs2w" event={"ID":"0ff16dee-aeea-4cb2-ba44-a8de28ff44ec","Type":"ContainerDied","Data":"1ed610a5cb8146470f514103c0c093ce71001aa6764d9e2039af708f123150f7"} Feb 19 10:04:19 crc kubenswrapper[4965]: I0219 10:04:19.548269 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 10:04:19 crc kubenswrapper[4965]: I0219 10:04:19.993000 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="648dfd3e-578b-4808-84c2-6dd4b4a7954c" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.184:3000/\": dial tcp 10.217.0.184:3000: connect: connection refused" Feb 19 10:04:20 crc kubenswrapper[4965]: I0219 10:04:20.069100 4965 generic.go:334] "Generic (PLEG): container finished" podID="6b83107c-ba00-4687-9c58-94535f5a9a1e" containerID="4aa009512bde6c9449b3cd2aec121e4159117a2462f90cdfe4e51b8a11236f69" exitCode=0 Feb 19 10:04:20 crc kubenswrapper[4965]: I0219 10:04:20.069181 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ec51-account-create-update-s5v66" event={"ID":"6b83107c-ba00-4687-9c58-94535f5a9a1e","Type":"ContainerDied","Data":"4aa009512bde6c9449b3cd2aec121e4159117a2462f90cdfe4e51b8a11236f69"} Feb 19 10:04:20 crc kubenswrapper[4965]: I0219 10:04:20.071096 4965 generic.go:334] "Generic (PLEG): container finished" podID="08122272-8ff7-4dad-95ea-c9190baad3ba" 
containerID="172a6ea9e25649dca72e46ff39997afa27a809b70090953c37d31aea0aaa238d" exitCode=0 Feb 19 10:04:20 crc kubenswrapper[4965]: I0219 10:04:20.071187 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4d8f-account-create-update-xdplb" event={"ID":"08122272-8ff7-4dad-95ea-c9190baad3ba","Type":"ContainerDied","Data":"172a6ea9e25649dca72e46ff39997afa27a809b70090953c37d31aea0aaa238d"} Feb 19 10:04:20 crc kubenswrapper[4965]: I0219 10:04:20.074321 4965 generic.go:334] "Generic (PLEG): container finished" podID="fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f" containerID="5a6e55ba4e494a7e74b24f036da932914ab2bfec978e4cf314b6624b157d1726" exitCode=0 Feb 19 10:04:20 crc kubenswrapper[4965]: I0219 10:04:20.074378 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-2j7hd" event={"ID":"fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f","Type":"ContainerDied","Data":"5a6e55ba4e494a7e74b24f036da932914ab2bfec978e4cf314b6624b157d1726"} Feb 19 10:04:20 crc kubenswrapper[4965]: I0219 10:04:20.078395 4965 generic.go:334] "Generic (PLEG): container finished" podID="a128eedd-c0da-49f3-8d7f-631d3a66a6d2" containerID="aad2b0a33d1032ed03bb2c05b9c8d0033cbf75a14ada835aeb73885c2eb9a14c" exitCode=0 Feb 19 10:04:20 crc kubenswrapper[4965]: I0219 10:04:20.078511 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b85b-account-create-update-z8f4q" event={"ID":"a128eedd-c0da-49f3-8d7f-631d3a66a6d2","Type":"ContainerDied","Data":"aad2b0a33d1032ed03bb2c05b9c8d0033cbf75a14ada835aeb73885c2eb9a14c"} Feb 19 10:04:20 crc kubenswrapper[4965]: I0219 10:04:20.722573 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-ffs2w" Feb 19 10:04:20 crc kubenswrapper[4965]: I0219 10:04:20.841878 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg27f\" (UniqueName: \"kubernetes.io/projected/0ff16dee-aeea-4cb2-ba44-a8de28ff44ec-kube-api-access-bg27f\") pod \"0ff16dee-aeea-4cb2-ba44-a8de28ff44ec\" (UID: \"0ff16dee-aeea-4cb2-ba44-a8de28ff44ec\") " Feb 19 10:04:20 crc kubenswrapper[4965]: I0219 10:04:20.842043 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ff16dee-aeea-4cb2-ba44-a8de28ff44ec-operator-scripts\") pod \"0ff16dee-aeea-4cb2-ba44-a8de28ff44ec\" (UID: \"0ff16dee-aeea-4cb2-ba44-a8de28ff44ec\") " Feb 19 10:04:20 crc kubenswrapper[4965]: I0219 10:04:20.842750 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ff16dee-aeea-4cb2-ba44-a8de28ff44ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ff16dee-aeea-4cb2-ba44-a8de28ff44ec" (UID: "0ff16dee-aeea-4cb2-ba44-a8de28ff44ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:04:20 crc kubenswrapper[4965]: I0219 10:04:20.847554 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-fl9jp" Feb 19 10:04:20 crc kubenswrapper[4965]: I0219 10:04:20.847804 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ff16dee-aeea-4cb2-ba44-a8de28ff44ec-kube-api-access-bg27f" (OuterVolumeSpecName: "kube-api-access-bg27f") pod "0ff16dee-aeea-4cb2-ba44-a8de28ff44ec" (UID: "0ff16dee-aeea-4cb2-ba44-a8de28ff44ec"). InnerVolumeSpecName "kube-api-access-bg27f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:20 crc kubenswrapper[4965]: I0219 10:04:20.944306 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvszx\" (UniqueName: \"kubernetes.io/projected/140e538f-9a0d-4c71-80a3-8710e0622021-kube-api-access-qvszx\") pod \"140e538f-9a0d-4c71-80a3-8710e0622021\" (UID: \"140e538f-9a0d-4c71-80a3-8710e0622021\") " Feb 19 10:04:20 crc kubenswrapper[4965]: I0219 10:04:20.944685 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/140e538f-9a0d-4c71-80a3-8710e0622021-operator-scripts\") pod \"140e538f-9a0d-4c71-80a3-8710e0622021\" (UID: \"140e538f-9a0d-4c71-80a3-8710e0622021\") " Feb 19 10:04:20 crc kubenswrapper[4965]: I0219 10:04:20.945183 4965 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ff16dee-aeea-4cb2-ba44-a8de28ff44ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:20 crc kubenswrapper[4965]: I0219 10:04:20.945221 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg27f\" (UniqueName: \"kubernetes.io/projected/0ff16dee-aeea-4cb2-ba44-a8de28ff44ec-kube-api-access-bg27f\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:20 crc kubenswrapper[4965]: I0219 10:04:20.945644 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/140e538f-9a0d-4c71-80a3-8710e0622021-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "140e538f-9a0d-4c71-80a3-8710e0622021" (UID: "140e538f-9a0d-4c71-80a3-8710e0622021"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:04:20 crc kubenswrapper[4965]: I0219 10:04:20.948330 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/140e538f-9a0d-4c71-80a3-8710e0622021-kube-api-access-qvszx" (OuterVolumeSpecName: "kube-api-access-qvszx") pod "140e538f-9a0d-4c71-80a3-8710e0622021" (UID: "140e538f-9a0d-4c71-80a3-8710e0622021"). InnerVolumeSpecName "kube-api-access-qvszx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.047074 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvszx\" (UniqueName: \"kubernetes.io/projected/140e538f-9a0d-4c71-80a3-8710e0622021-kube-api-access-qvszx\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.047114 4965 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/140e538f-9a0d-4c71-80a3-8710e0622021-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.097632 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ffs2w" event={"ID":"0ff16dee-aeea-4cb2-ba44-a8de28ff44ec","Type":"ContainerDied","Data":"25d3d6b801a87e5e46407365d99e1b9df4a72e9e34c05630903f010ce81d336a"} Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.097694 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25d3d6b801a87e5e46407365d99e1b9df4a72e9e34c05630903f010ce81d336a" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.097656 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-ffs2w" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.103404 4965 generic.go:334] "Generic (PLEG): container finished" podID="648dfd3e-578b-4808-84c2-6dd4b4a7954c" containerID="28fb6b5acdca274bb67d1413f4c1739d36b826769dec9bc6ccbf95b4ebc87265" exitCode=0 Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.103484 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"648dfd3e-578b-4808-84c2-6dd4b4a7954c","Type":"ContainerDied","Data":"28fb6b5acdca274bb67d1413f4c1739d36b826769dec9bc6ccbf95b4ebc87265"} Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.105643 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-fl9jp" event={"ID":"140e538f-9a0d-4c71-80a3-8710e0622021","Type":"ContainerDied","Data":"37cdc2791f79e69782cb61b79786c7b5eb7bb44bfac02bf796fcba99d585577b"} Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.105682 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37cdc2791f79e69782cb61b79786c7b5eb7bb44bfac02bf796fcba99d585577b" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.105713 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-fl9jp" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.402926 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.563922 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-config-data\") pod \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.563980 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slt4m\" (UniqueName: \"kubernetes.io/projected/648dfd3e-578b-4808-84c2-6dd4b4a7954c-kube-api-access-slt4m\") pod \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.564090 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-sg-core-conf-yaml\") pod \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.564128 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/648dfd3e-578b-4808-84c2-6dd4b4a7954c-run-httpd\") pod \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.564356 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-combined-ca-bundle\") pod \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.564502 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/648dfd3e-578b-4808-84c2-6dd4b4a7954c-log-httpd\") pod \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.564539 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-scripts\") pod \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\" (UID: \"648dfd3e-578b-4808-84c2-6dd4b4a7954c\") " Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.570067 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-scripts" (OuterVolumeSpecName: "scripts") pod "648dfd3e-578b-4808-84c2-6dd4b4a7954c" (UID: "648dfd3e-578b-4808-84c2-6dd4b4a7954c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.570362 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/648dfd3e-578b-4808-84c2-6dd4b4a7954c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "648dfd3e-578b-4808-84c2-6dd4b4a7954c" (UID: "648dfd3e-578b-4808-84c2-6dd4b4a7954c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.580638 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/648dfd3e-578b-4808-84c2-6dd4b4a7954c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "648dfd3e-578b-4808-84c2-6dd4b4a7954c" (UID: "648dfd3e-578b-4808-84c2-6dd4b4a7954c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.583482 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/648dfd3e-578b-4808-84c2-6dd4b4a7954c-kube-api-access-slt4m" (OuterVolumeSpecName: "kube-api-access-slt4m") pod "648dfd3e-578b-4808-84c2-6dd4b4a7954c" (UID: "648dfd3e-578b-4808-84c2-6dd4b4a7954c"). InnerVolumeSpecName "kube-api-access-slt4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.601830 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "648dfd3e-578b-4808-84c2-6dd4b4a7954c" (UID: "648dfd3e-578b-4808-84c2-6dd4b4a7954c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.667885 4965 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/648dfd3e-578b-4808-84c2-6dd4b4a7954c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.668227 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.668240 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slt4m\" (UniqueName: \"kubernetes.io/projected/648dfd3e-578b-4808-84c2-6dd4b4a7954c-kube-api-access-slt4m\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.668252 4965 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-sg-core-conf-yaml\") on 
node \"crc\" DevicePath \"\"" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.668265 4965 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/648dfd3e-578b-4808-84c2-6dd4b4a7954c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.713122 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-config-data" (OuterVolumeSpecName: "config-data") pod "648dfd3e-578b-4808-84c2-6dd4b4a7954c" (UID: "648dfd3e-578b-4808-84c2-6dd4b4a7954c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.713772 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "648dfd3e-578b-4808-84c2-6dd4b4a7954c" (UID: "648dfd3e-578b-4808-84c2-6dd4b4a7954c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.739893 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-76546766f9-plbd4" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.771747 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.771775 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/648dfd3e-578b-4808-84c2-6dd4b4a7954c-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.829714 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-54864c6876-6fmg4"] Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.829955 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-54864c6876-6fmg4" podUID="4bccdd96-d87f-4f40-979a-b650eabac24f" containerName="neutron-api" containerID="cri-o://03979efa8ac1d4da20fb280931fde41b1fc59b331cb50e516ea29ab30a6bde45" gracePeriod=30 Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.830384 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-54864c6876-6fmg4" podUID="4bccdd96-d87f-4f40-979a-b650eabac24f" containerName="neutron-httpd" containerID="cri-o://ff0f6d36cd2dc7803669894a9345a3b8a8cc724d286dfbef459b4b0ac0db8074" gracePeriod=30 Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.873579 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4d8f-account-create-update-xdplb" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.896758 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-ec51-account-create-update-s5v66" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.958678 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2j7hd" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.965294 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b85b-account-create-update-z8f4q" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.978929 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjdcp\" (UniqueName: \"kubernetes.io/projected/6b83107c-ba00-4687-9c58-94535f5a9a1e-kube-api-access-rjdcp\") pod \"6b83107c-ba00-4687-9c58-94535f5a9a1e\" (UID: \"6b83107c-ba00-4687-9c58-94535f5a9a1e\") " Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.979024 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmxrd\" (UniqueName: \"kubernetes.io/projected/08122272-8ff7-4dad-95ea-c9190baad3ba-kube-api-access-jmxrd\") pod \"08122272-8ff7-4dad-95ea-c9190baad3ba\" (UID: \"08122272-8ff7-4dad-95ea-c9190baad3ba\") " Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.979122 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b83107c-ba00-4687-9c58-94535f5a9a1e-operator-scripts\") pod \"6b83107c-ba00-4687-9c58-94535f5a9a1e\" (UID: \"6b83107c-ba00-4687-9c58-94535f5a9a1e\") " Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.979146 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08122272-8ff7-4dad-95ea-c9190baad3ba-operator-scripts\") pod \"08122272-8ff7-4dad-95ea-c9190baad3ba\" (UID: \"08122272-8ff7-4dad-95ea-c9190baad3ba\") " Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.980306 4965 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08122272-8ff7-4dad-95ea-c9190baad3ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08122272-8ff7-4dad-95ea-c9190baad3ba" (UID: "08122272-8ff7-4dad-95ea-c9190baad3ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.981698 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b83107c-ba00-4687-9c58-94535f5a9a1e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b83107c-ba00-4687-9c58-94535f5a9a1e" (UID: "6b83107c-ba00-4687-9c58-94535f5a9a1e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.988375 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08122272-8ff7-4dad-95ea-c9190baad3ba-kube-api-access-jmxrd" (OuterVolumeSpecName: "kube-api-access-jmxrd") pod "08122272-8ff7-4dad-95ea-c9190baad3ba" (UID: "08122272-8ff7-4dad-95ea-c9190baad3ba"). InnerVolumeSpecName "kube-api-access-jmxrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:21 crc kubenswrapper[4965]: I0219 10:04:21.988466 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b83107c-ba00-4687-9c58-94535f5a9a1e-kube-api-access-rjdcp" (OuterVolumeSpecName: "kube-api-access-rjdcp") pod "6b83107c-ba00-4687-9c58-94535f5a9a1e" (UID: "6b83107c-ba00-4687-9c58-94535f5a9a1e"). InnerVolumeSpecName "kube-api-access-rjdcp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.080776 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjc4r\" (UniqueName: \"kubernetes.io/projected/fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f-kube-api-access-hjc4r\") pod \"fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f\" (UID: \"fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f\") " Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.080916 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f-operator-scripts\") pod \"fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f\" (UID: \"fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f\") " Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.081075 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8gk9\" (UniqueName: \"kubernetes.io/projected/a128eedd-c0da-49f3-8d7f-631d3a66a6d2-kube-api-access-p8gk9\") pod \"a128eedd-c0da-49f3-8d7f-631d3a66a6d2\" (UID: \"a128eedd-c0da-49f3-8d7f-631d3a66a6d2\") " Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.081096 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a128eedd-c0da-49f3-8d7f-631d3a66a6d2-operator-scripts\") pod \"a128eedd-c0da-49f3-8d7f-631d3a66a6d2\" (UID: \"a128eedd-c0da-49f3-8d7f-631d3a66a6d2\") " Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.081380 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f" (UID: "fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.081721 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a128eedd-c0da-49f3-8d7f-631d3a66a6d2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a128eedd-c0da-49f3-8d7f-631d3a66a6d2" (UID: "a128eedd-c0da-49f3-8d7f-631d3a66a6d2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.081801 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmxrd\" (UniqueName: \"kubernetes.io/projected/08122272-8ff7-4dad-95ea-c9190baad3ba-kube-api-access-jmxrd\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.081816 4965 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b83107c-ba00-4687-9c58-94535f5a9a1e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.081826 4965 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08122272-8ff7-4dad-95ea-c9190baad3ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.081835 4965 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.081844 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjdcp\" (UniqueName: \"kubernetes.io/projected/6b83107c-ba00-4687-9c58-94535f5a9a1e-kube-api-access-rjdcp\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.087392 4965 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f-kube-api-access-hjc4r" (OuterVolumeSpecName: "kube-api-access-hjc4r") pod "fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f" (UID: "fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f"). InnerVolumeSpecName "kube-api-access-hjc4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.087798 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a128eedd-c0da-49f3-8d7f-631d3a66a6d2-kube-api-access-p8gk9" (OuterVolumeSpecName: "kube-api-access-p8gk9") pod "a128eedd-c0da-49f3-8d7f-631d3a66a6d2" (UID: "a128eedd-c0da-49f3-8d7f-631d3a66a6d2"). InnerVolumeSpecName "kube-api-access-p8gk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.128318 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b85b-account-create-update-z8f4q" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.128522 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b85b-account-create-update-z8f4q" event={"ID":"a128eedd-c0da-49f3-8d7f-631d3a66a6d2","Type":"ContainerDied","Data":"e08d830825c3ea5b1547f7e4d57c4eccfcbef77e748ba478dcdfb17e56344e03"} Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.128598 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e08d830825c3ea5b1547f7e4d57c4eccfcbef77e748ba478dcdfb17e56344e03" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.130586 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ec51-account-create-update-s5v66" event={"ID":"6b83107c-ba00-4687-9c58-94535f5a9a1e","Type":"ContainerDied","Data":"b71d17452259326f4dbdf5ebf333467733cee0ecd41292dd5160b0b73489a1e2"} Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.130617 4965 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b71d17452259326f4dbdf5ebf333467733cee0ecd41292dd5160b0b73489a1e2" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.130672 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ec51-account-create-update-s5v66" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.137740 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4d8f-account-create-update-xdplb" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.137747 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4d8f-account-create-update-xdplb" event={"ID":"08122272-8ff7-4dad-95ea-c9190baad3ba","Type":"ContainerDied","Data":"d73361f8a0952d5a9eece39d7f0199a968519272c8fc4ebe388178b12205d992"} Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.137896 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d73361f8a0952d5a9eece39d7f0199a968519272c8fc4ebe388178b12205d992" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.147139 4965 generic.go:334] "Generic (PLEG): container finished" podID="4bccdd96-d87f-4f40-979a-b650eabac24f" containerID="ff0f6d36cd2dc7803669894a9345a3b8a8cc724d286dfbef459b4b0ac0db8074" exitCode=0 Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.147317 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54864c6876-6fmg4" event={"ID":"4bccdd96-d87f-4f40-979a-b650eabac24f","Type":"ContainerDied","Data":"ff0f6d36cd2dc7803669894a9345a3b8a8cc724d286dfbef459b4b0ac0db8074"} Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.150148 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-2j7hd" event={"ID":"fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f","Type":"ContainerDied","Data":"281c2038cf8421eab8dcb0f8b9008edafe2f25500ed747bc6d2140456ccd7d44"} Feb 19 10:04:22 crc 
kubenswrapper[4965]: I0219 10:04:22.150192 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="281c2038cf8421eab8dcb0f8b9008edafe2f25500ed747bc6d2140456ccd7d44" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.150342 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2j7hd" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.156783 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"648dfd3e-578b-4808-84c2-6dd4b4a7954c","Type":"ContainerDied","Data":"f48cda5af32bbd778a95326b79f84412dc1e633b7815a9c03917eda9c3c9fd8e"} Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.156841 4965 scope.go:117] "RemoveContainer" containerID="0400aa7c9dfd960cf23d08febf3d6ae9bd1f5875a6b095538c6e6b69821b1ee9" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.157026 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.184398 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8gk9\" (UniqueName: \"kubernetes.io/projected/a128eedd-c0da-49f3-8d7f-631d3a66a6d2-kube-api-access-p8gk9\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.184427 4965 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a128eedd-c0da-49f3-8d7f-631d3a66a6d2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.184461 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjc4r\" (UniqueName: \"kubernetes.io/projected/fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f-kube-api-access-hjc4r\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.231643 4965 scope.go:117] "RemoveContainer" 
containerID="b4c83774e861ec73bc9c0dfddcfedf12568c709a0c36018bd09c4e6870629001" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.244275 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.263664 4965 scope.go:117] "RemoveContainer" containerID="28fb6b5acdca274bb67d1413f4c1739d36b826769dec9bc6ccbf95b4ebc87265" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.266333 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.290690 4965 scope.go:117] "RemoveContainer" containerID="e142f3cf13924b1eb47f07fe5fdb88a893daad952a4d75c074a02fa0694e042d" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.292160 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:22 crc kubenswrapper[4965]: E0219 10:04:22.292639 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="648dfd3e-578b-4808-84c2-6dd4b4a7954c" containerName="sg-core" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.292654 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="648dfd3e-578b-4808-84c2-6dd4b4a7954c" containerName="sg-core" Feb 19 10:04:22 crc kubenswrapper[4965]: E0219 10:04:22.292666 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08122272-8ff7-4dad-95ea-c9190baad3ba" containerName="mariadb-account-create-update" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.292674 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="08122272-8ff7-4dad-95ea-c9190baad3ba" containerName="mariadb-account-create-update" Feb 19 10:04:22 crc kubenswrapper[4965]: E0219 10:04:22.292690 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ff16dee-aeea-4cb2-ba44-a8de28ff44ec" containerName="mariadb-database-create" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.292696 4965 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="0ff16dee-aeea-4cb2-ba44-a8de28ff44ec" containerName="mariadb-database-create" Feb 19 10:04:22 crc kubenswrapper[4965]: E0219 10:04:22.292708 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a128eedd-c0da-49f3-8d7f-631d3a66a6d2" containerName="mariadb-account-create-update" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.292714 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="a128eedd-c0da-49f3-8d7f-631d3a66a6d2" containerName="mariadb-account-create-update" Feb 19 10:04:22 crc kubenswrapper[4965]: E0219 10:04:22.292725 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f" containerName="mariadb-database-create" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.292730 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f" containerName="mariadb-database-create" Feb 19 10:04:22 crc kubenswrapper[4965]: E0219 10:04:22.292740 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="648dfd3e-578b-4808-84c2-6dd4b4a7954c" containerName="ceilometer-central-agent" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.292745 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="648dfd3e-578b-4808-84c2-6dd4b4a7954c" containerName="ceilometer-central-agent" Feb 19 10:04:22 crc kubenswrapper[4965]: E0219 10:04:22.292769 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="648dfd3e-578b-4808-84c2-6dd4b4a7954c" containerName="proxy-httpd" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.292777 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="648dfd3e-578b-4808-84c2-6dd4b4a7954c" containerName="proxy-httpd" Feb 19 10:04:22 crc kubenswrapper[4965]: E0219 10:04:22.292790 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="648dfd3e-578b-4808-84c2-6dd4b4a7954c" containerName="ceilometer-notification-agent" Feb 19 10:04:22 crc kubenswrapper[4965]: 
I0219 10:04:22.292797 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="648dfd3e-578b-4808-84c2-6dd4b4a7954c" containerName="ceilometer-notification-agent" Feb 19 10:04:22 crc kubenswrapper[4965]: E0219 10:04:22.292812 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b83107c-ba00-4687-9c58-94535f5a9a1e" containerName="mariadb-account-create-update" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.292819 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b83107c-ba00-4687-9c58-94535f5a9a1e" containerName="mariadb-account-create-update" Feb 19 10:04:22 crc kubenswrapper[4965]: E0219 10:04:22.292834 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="140e538f-9a0d-4c71-80a3-8710e0622021" containerName="mariadb-database-create" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.292842 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="140e538f-9a0d-4c71-80a3-8710e0622021" containerName="mariadb-database-create" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.293044 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="648dfd3e-578b-4808-84c2-6dd4b4a7954c" containerName="ceilometer-notification-agent" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.293063 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="a128eedd-c0da-49f3-8d7f-631d3a66a6d2" containerName="mariadb-account-create-update" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.293079 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ff16dee-aeea-4cb2-ba44-a8de28ff44ec" containerName="mariadb-database-create" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.293102 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="648dfd3e-578b-4808-84c2-6dd4b4a7954c" containerName="proxy-httpd" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.293115 4965 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="08122272-8ff7-4dad-95ea-c9190baad3ba" containerName="mariadb-account-create-update" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.293127 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="140e538f-9a0d-4c71-80a3-8710e0622021" containerName="mariadb-database-create" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.293135 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f" containerName="mariadb-database-create" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.293144 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="648dfd3e-578b-4808-84c2-6dd4b4a7954c" containerName="sg-core" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.293152 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="648dfd3e-578b-4808-84c2-6dd4b4a7954c" containerName="ceilometer-central-agent" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.293162 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b83107c-ba00-4687-9c58-94535f5a9a1e" containerName="mariadb-account-create-update" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.295151 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.297993 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.298143 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.311790 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.388367 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnccw\" (UniqueName: \"kubernetes.io/projected/05248f7d-0b63-4529-a251-00944910acce-kube-api-access-hnccw\") pod \"ceilometer-0\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") " pod="openstack/ceilometer-0" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.388471 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-config-data\") pod \"ceilometer-0\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") " pod="openstack/ceilometer-0" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.388507 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-scripts\") pod \"ceilometer-0\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") " pod="openstack/ceilometer-0" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.388564 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05248f7d-0b63-4529-a251-00944910acce-log-httpd\") pod \"ceilometer-0\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") " 
pod="openstack/ceilometer-0" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.388824 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") " pod="openstack/ceilometer-0" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.388911 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05248f7d-0b63-4529-a251-00944910acce-run-httpd\") pod \"ceilometer-0\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") " pod="openstack/ceilometer-0" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.389065 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") " pod="openstack/ceilometer-0" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.491331 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-scripts\") pod \"ceilometer-0\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") " pod="openstack/ceilometer-0" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.491436 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05248f7d-0b63-4529-a251-00944910acce-log-httpd\") pod \"ceilometer-0\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") " pod="openstack/ceilometer-0" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.491476 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") " pod="openstack/ceilometer-0" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.491504 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05248f7d-0b63-4529-a251-00944910acce-run-httpd\") pod \"ceilometer-0\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") " pod="openstack/ceilometer-0" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.491543 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") " pod="openstack/ceilometer-0" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.491586 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnccw\" (UniqueName: \"kubernetes.io/projected/05248f7d-0b63-4529-a251-00944910acce-kube-api-access-hnccw\") pod \"ceilometer-0\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") " pod="openstack/ceilometer-0" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.491627 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-config-data\") pod \"ceilometer-0\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") " pod="openstack/ceilometer-0" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.491904 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05248f7d-0b63-4529-a251-00944910acce-log-httpd\") pod \"ceilometer-0\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") " pod="openstack/ceilometer-0" Feb 19 10:04:22 
crc kubenswrapper[4965]: I0219 10:04:22.492172 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05248f7d-0b63-4529-a251-00944910acce-run-httpd\") pod \"ceilometer-0\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") " pod="openstack/ceilometer-0" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.495275 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") " pod="openstack/ceilometer-0" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.497226 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-config-data\") pod \"ceilometer-0\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") " pod="openstack/ceilometer-0" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.497942 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") " pod="openstack/ceilometer-0" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.498923 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-scripts\") pod \"ceilometer-0\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") " pod="openstack/ceilometer-0" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.512031 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnccw\" (UniqueName: \"kubernetes.io/projected/05248f7d-0b63-4529-a251-00944910acce-kube-api-access-hnccw\") pod \"ceilometer-0\" (UID: 
\"05248f7d-0b63-4529-a251-00944910acce\") " pod="openstack/ceilometer-0" Feb 19 10:04:22 crc kubenswrapper[4965]: I0219 10:04:22.619244 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:04:23 crc kubenswrapper[4965]: I0219 10:04:23.108041 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:23 crc kubenswrapper[4965]: I0219 10:04:23.211570 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="648dfd3e-578b-4808-84c2-6dd4b4a7954c" path="/var/lib/kubelet/pods/648dfd3e-578b-4808-84c2-6dd4b4a7954c/volumes" Feb 19 10:04:25 crc kubenswrapper[4965]: I0219 10:04:25.281505 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-76485c5b9f-wzzpl" Feb 19 10:04:25 crc kubenswrapper[4965]: I0219 10:04:25.282554 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-76485c5b9f-wzzpl" Feb 19 10:04:26 crc kubenswrapper[4965]: I0219 10:04:26.210817 4965 generic.go:334] "Generic (PLEG): container finished" podID="4bccdd96-d87f-4f40-979a-b650eabac24f" containerID="03979efa8ac1d4da20fb280931fde41b1fc59b331cb50e516ea29ab30a6bde45" exitCode=0 Feb 19 10:04:26 crc kubenswrapper[4965]: I0219 10:04:26.210883 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54864c6876-6fmg4" event={"ID":"4bccdd96-d87f-4f40-979a-b650eabac24f","Type":"ContainerDied","Data":"03979efa8ac1d4da20fb280931fde41b1fc59b331cb50e516ea29ab30a6bde45"} Feb 19 10:04:27 crc kubenswrapper[4965]: I0219 10:04:27.411147 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-pkhzc"] Feb 19 10:04:27 crc kubenswrapper[4965]: I0219 10:04:27.412649 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-pkhzc" Feb 19 10:04:27 crc kubenswrapper[4965]: I0219 10:04:27.418886 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 19 10:04:27 crc kubenswrapper[4965]: I0219 10:04:27.419252 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 10:04:27 crc kubenswrapper[4965]: I0219 10:04:27.419289 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wxc9w" Feb 19 10:04:27 crc kubenswrapper[4965]: I0219 10:04:27.438117 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-pkhzc"] Feb 19 10:04:27 crc kubenswrapper[4965]: I0219 10:04:27.500744 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jngbg\" (UniqueName: \"kubernetes.io/projected/d5e000de-4745-47c0-b6e6-8735c626518e-kube-api-access-jngbg\") pod \"nova-cell0-conductor-db-sync-pkhzc\" (UID: \"d5e000de-4745-47c0-b6e6-8735c626518e\") " pod="openstack/nova-cell0-conductor-db-sync-pkhzc" Feb 19 10:04:27 crc kubenswrapper[4965]: I0219 10:04:27.500814 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5e000de-4745-47c0-b6e6-8735c626518e-scripts\") pod \"nova-cell0-conductor-db-sync-pkhzc\" (UID: \"d5e000de-4745-47c0-b6e6-8735c626518e\") " pod="openstack/nova-cell0-conductor-db-sync-pkhzc" Feb 19 10:04:27 crc kubenswrapper[4965]: I0219 10:04:27.500872 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e000de-4745-47c0-b6e6-8735c626518e-config-data\") pod \"nova-cell0-conductor-db-sync-pkhzc\" (UID: \"d5e000de-4745-47c0-b6e6-8735c626518e\") " 
pod="openstack/nova-cell0-conductor-db-sync-pkhzc"
Feb 19 10:04:27 crc kubenswrapper[4965]: I0219 10:04:27.500986 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e000de-4745-47c0-b6e6-8735c626518e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-pkhzc\" (UID: \"d5e000de-4745-47c0-b6e6-8735c626518e\") " pod="openstack/nova-cell0-conductor-db-sync-pkhzc"
Feb 19 10:04:27 crc kubenswrapper[4965]: I0219 10:04:27.602695 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jngbg\" (UniqueName: \"kubernetes.io/projected/d5e000de-4745-47c0-b6e6-8735c626518e-kube-api-access-jngbg\") pod \"nova-cell0-conductor-db-sync-pkhzc\" (UID: \"d5e000de-4745-47c0-b6e6-8735c626518e\") " pod="openstack/nova-cell0-conductor-db-sync-pkhzc"
Feb 19 10:04:27 crc kubenswrapper[4965]: I0219 10:04:27.602746 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5e000de-4745-47c0-b6e6-8735c626518e-scripts\") pod \"nova-cell0-conductor-db-sync-pkhzc\" (UID: \"d5e000de-4745-47c0-b6e6-8735c626518e\") " pod="openstack/nova-cell0-conductor-db-sync-pkhzc"
Feb 19 10:04:27 crc kubenswrapper[4965]: I0219 10:04:27.602804 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e000de-4745-47c0-b6e6-8735c626518e-config-data\") pod \"nova-cell0-conductor-db-sync-pkhzc\" (UID: \"d5e000de-4745-47c0-b6e6-8735c626518e\") " pod="openstack/nova-cell0-conductor-db-sync-pkhzc"
Feb 19 10:04:27 crc kubenswrapper[4965]: I0219 10:04:27.602872 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e000de-4745-47c0-b6e6-8735c626518e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-pkhzc\" (UID: \"d5e000de-4745-47c0-b6e6-8735c626518e\") " pod="openstack/nova-cell0-conductor-db-sync-pkhzc"
Feb 19 10:04:27 crc kubenswrapper[4965]: I0219 10:04:27.615320 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e000de-4745-47c0-b6e6-8735c626518e-config-data\") pod \"nova-cell0-conductor-db-sync-pkhzc\" (UID: \"d5e000de-4745-47c0-b6e6-8735c626518e\") " pod="openstack/nova-cell0-conductor-db-sync-pkhzc"
Feb 19 10:04:27 crc kubenswrapper[4965]: I0219 10:04:27.616085 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e000de-4745-47c0-b6e6-8735c626518e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-pkhzc\" (UID: \"d5e000de-4745-47c0-b6e6-8735c626518e\") " pod="openstack/nova-cell0-conductor-db-sync-pkhzc"
Feb 19 10:04:27 crc kubenswrapper[4965]: I0219 10:04:27.629504 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jngbg\" (UniqueName: \"kubernetes.io/projected/d5e000de-4745-47c0-b6e6-8735c626518e-kube-api-access-jngbg\") pod \"nova-cell0-conductor-db-sync-pkhzc\" (UID: \"d5e000de-4745-47c0-b6e6-8735c626518e\") " pod="openstack/nova-cell0-conductor-db-sync-pkhzc"
Feb 19 10:04:27 crc kubenswrapper[4965]: I0219 10:04:27.631967 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5e000de-4745-47c0-b6e6-8735c626518e-scripts\") pod \"nova-cell0-conductor-db-sync-pkhzc\" (UID: \"d5e000de-4745-47c0-b6e6-8735c626518e\") " pod="openstack/nova-cell0-conductor-db-sync-pkhzc"
Feb 19 10:04:27 crc kubenswrapper[4965]: I0219 10:04:27.748282 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-pkhzc"
Feb 19 10:04:28 crc kubenswrapper[4965]: W0219 10:04:28.306692 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05248f7d_0b63_4529_a251_00944910acce.slice/crio-90443eca7fdb2273342f6cca36ee0d5591900a603693983d3170e62ed3021cd5 WatchSource:0}: Error finding container 90443eca7fdb2273342f6cca36ee0d5591900a603693983d3170e62ed3021cd5: Status 404 returned error can't find the container with id 90443eca7fdb2273342f6cca36ee0d5591900a603693983d3170e62ed3021cd5
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.049548 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54864c6876-6fmg4"
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.070641 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-pkhzc"]
Feb 19 10:04:29 crc kubenswrapper[4965]: W0219 10:04:29.078613 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5e000de_4745_47c0_b6e6_8735c626518e.slice/crio-f250f6c6397aee1740ca2facf207e2acd5cd929f1225e8bc74d84d4e0649401c WatchSource:0}: Error finding container f250f6c6397aee1740ca2facf207e2acd5cd929f1225e8bc74d84d4e0649401c: Status 404 returned error can't find the container with id f250f6c6397aee1740ca2facf207e2acd5cd929f1225e8bc74d84d4e0649401c
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.150345 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-httpd-config\") pod \"4bccdd96-d87f-4f40-979a-b650eabac24f\" (UID: \"4bccdd96-d87f-4f40-979a-b650eabac24f\") "
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.150434 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-config\") pod \"4bccdd96-d87f-4f40-979a-b650eabac24f\" (UID: \"4bccdd96-d87f-4f40-979a-b650eabac24f\") "
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.150641 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-combined-ca-bundle\") pod \"4bccdd96-d87f-4f40-979a-b650eabac24f\" (UID: \"4bccdd96-d87f-4f40-979a-b650eabac24f\") "
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.150689 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9qwr\" (UniqueName: \"kubernetes.io/projected/4bccdd96-d87f-4f40-979a-b650eabac24f-kube-api-access-n9qwr\") pod \"4bccdd96-d87f-4f40-979a-b650eabac24f\" (UID: \"4bccdd96-d87f-4f40-979a-b650eabac24f\") "
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.150718 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-ovndb-tls-certs\") pod \"4bccdd96-d87f-4f40-979a-b650eabac24f\" (UID: \"4bccdd96-d87f-4f40-979a-b650eabac24f\") "
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.155350 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "4bccdd96-d87f-4f40-979a-b650eabac24f" (UID: "4bccdd96-d87f-4f40-979a-b650eabac24f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.156065 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bccdd96-d87f-4f40-979a-b650eabac24f-kube-api-access-n9qwr" (OuterVolumeSpecName: "kube-api-access-n9qwr") pod "4bccdd96-d87f-4f40-979a-b650eabac24f" (UID: "4bccdd96-d87f-4f40-979a-b650eabac24f"). InnerVolumeSpecName "kube-api-access-n9qwr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.230782 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-config" (OuterVolumeSpecName: "config") pod "4bccdd96-d87f-4f40-979a-b650eabac24f" (UID: "4bccdd96-d87f-4f40-979a-b650eabac24f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.236405 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bccdd96-d87f-4f40-979a-b650eabac24f" (UID: "4bccdd96-d87f-4f40-979a-b650eabac24f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.240139 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "4bccdd96-d87f-4f40-979a-b650eabac24f" (UID: "4bccdd96-d87f-4f40-979a-b650eabac24f"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.249782 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"96d45563-22bf-42f1-bc03-4fd3b223293d","Type":"ContainerStarted","Data":"ddd5f06178ac8e7f9249c4d53b5478596d4f5e2e7e788d570c11a6bd215bd095"}
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.251882 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-pkhzc" event={"ID":"d5e000de-4745-47c0-b6e6-8735c626518e","Type":"ContainerStarted","Data":"f250f6c6397aee1740ca2facf207e2acd5cd929f1225e8bc74d84d4e0649401c"}
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.252110 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.252127 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9qwr\" (UniqueName: \"kubernetes.io/projected/4bccdd96-d87f-4f40-979a-b650eabac24f-kube-api-access-n9qwr\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.252139 4965 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.252148 4965 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.252156 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bccdd96-d87f-4f40-979a-b650eabac24f-config\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.254176 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54864c6876-6fmg4" event={"ID":"4bccdd96-d87f-4f40-979a-b650eabac24f","Type":"ContainerDied","Data":"220b144dc90ee6d345a2a8093536ebb7bb664734a54933d23c9b3c921826f885"}
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.254233 4965 scope.go:117] "RemoveContainer" containerID="ff0f6d36cd2dc7803669894a9345a3b8a8cc724d286dfbef459b4b0ac0db8074"
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.254344 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54864c6876-6fmg4"
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.261393 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05248f7d-0b63-4529-a251-00944910acce","Type":"ContainerStarted","Data":"aff0e6e5d6e8038481e818e4366393d1195eb84f3213b1374079b13fd2f1be9e"}
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.261427 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05248f7d-0b63-4529-a251-00944910acce","Type":"ContainerStarted","Data":"90443eca7fdb2273342f6cca36ee0d5591900a603693983d3170e62ed3021cd5"}
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.272133 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.08019977 podStartE2EDuration="16.272115988s" podCreationTimestamp="2026-02-19 10:04:13 +0000 UTC" firstStartedPulling="2026-02-19 10:04:14.258424379 +0000 UTC m=+1309.879745689" lastFinishedPulling="2026-02-19 10:04:28.450340597 +0000 UTC m=+1324.071661907" observedRunningTime="2026-02-19 10:04:29.269781082 +0000 UTC m=+1324.891102392" watchObservedRunningTime="2026-02-19 10:04:29.272115988 +0000 UTC m=+1324.893437298"
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.284284 4965 scope.go:117] "RemoveContainer" containerID="03979efa8ac1d4da20fb280931fde41b1fc59b331cb50e516ea29ab30a6bde45"
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.308344 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-54864c6876-6fmg4"]
Feb 19 10:04:29 crc kubenswrapper[4965]: I0219 10:04:29.324956 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-54864c6876-6fmg4"]
Feb 19 10:04:30 crc kubenswrapper[4965]: I0219 10:04:30.280731 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05248f7d-0b63-4529-a251-00944910acce","Type":"ContainerStarted","Data":"d6d466979ab0d0de91c9d545c8444bd61b165e58c8d44785af167c286f5e4f90"}
Feb 19 10:04:30 crc kubenswrapper[4965]: I0219 10:04:30.807954 4965 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 10:04:31 crc kubenswrapper[4965]: I0219 10:04:31.013947 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:04:31 crc kubenswrapper[4965]: I0219 10:04:31.211615 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bccdd96-d87f-4f40-979a-b650eabac24f" path="/var/lib/kubelet/pods/4bccdd96-d87f-4f40-979a-b650eabac24f/volumes"
Feb 19 10:04:31 crc kubenswrapper[4965]: I0219 10:04:31.290821 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05248f7d-0b63-4529-a251-00944910acce","Type":"ContainerStarted","Data":"7ebeada2bdc06b470d9b8f64f26cbb5c9976b8e062a0d17601206135c3c3c360"}
Feb 19 10:04:33 crc kubenswrapper[4965]: I0219 10:04:33.318073 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05248f7d-0b63-4529-a251-00944910acce","Type":"ContainerStarted","Data":"66a9192a0caa9a9358de00ee5fb10361cfd139d68a18a207d756d98bc38c3f74"}
Feb 19 10:04:33 crc kubenswrapper[4965]: I0219 10:04:33.318465 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 19 10:04:33 crc kubenswrapper[4965]: I0219 10:04:33.318350 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05248f7d-0b63-4529-a251-00944910acce" containerName="proxy-httpd" containerID="cri-o://66a9192a0caa9a9358de00ee5fb10361cfd139d68a18a207d756d98bc38c3f74" gracePeriod=30
Feb 19 10:04:33 crc kubenswrapper[4965]: I0219 10:04:33.318240 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05248f7d-0b63-4529-a251-00944910acce" containerName="ceilometer-central-agent" containerID="cri-o://aff0e6e5d6e8038481e818e4366393d1195eb84f3213b1374079b13fd2f1be9e" gracePeriod=30
Feb 19 10:04:33 crc kubenswrapper[4965]: I0219 10:04:33.318360 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05248f7d-0b63-4529-a251-00944910acce" containerName="sg-core" containerID="cri-o://7ebeada2bdc06b470d9b8f64f26cbb5c9976b8e062a0d17601206135c3c3c360" gracePeriod=30
Feb 19 10:04:33 crc kubenswrapper[4965]: I0219 10:04:33.318360 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05248f7d-0b63-4529-a251-00944910acce" containerName="ceilometer-notification-agent" containerID="cri-o://d6d466979ab0d0de91c9d545c8444bd61b165e58c8d44785af167c286f5e4f90" gracePeriod=30
Feb 19 10:04:33 crc kubenswrapper[4965]: I0219 10:04:33.344907 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=7.235231244 podStartE2EDuration="11.344892274s" podCreationTimestamp="2026-02-19 10:04:22 +0000 UTC" firstStartedPulling="2026-02-19 10:04:28.370068307 +0000 UTC m=+1323.991389617" lastFinishedPulling="2026-02-19 10:04:32.479729337 +0000 UTC m=+1328.101050647" observedRunningTime="2026-02-19 10:04:33.34227588 +0000 UTC m=+1328.963597200" watchObservedRunningTime="2026-02-19 10:04:33.344892274 +0000 UTC m=+1328.966213584"
Feb 19 10:04:34 crc kubenswrapper[4965]: I0219 10:04:34.347112 4965 generic.go:334] "Generic (PLEG): container finished" podID="05248f7d-0b63-4529-a251-00944910acce" containerID="66a9192a0caa9a9358de00ee5fb10361cfd139d68a18a207d756d98bc38c3f74" exitCode=0
Feb 19 10:04:34 crc kubenswrapper[4965]: I0219 10:04:34.347459 4965 generic.go:334] "Generic (PLEG): container finished" podID="05248f7d-0b63-4529-a251-00944910acce" containerID="7ebeada2bdc06b470d9b8f64f26cbb5c9976b8e062a0d17601206135c3c3c360" exitCode=2
Feb 19 10:04:34 crc kubenswrapper[4965]: I0219 10:04:34.347473 4965 generic.go:334] "Generic (PLEG): container finished" podID="05248f7d-0b63-4529-a251-00944910acce" containerID="d6d466979ab0d0de91c9d545c8444bd61b165e58c8d44785af167c286f5e4f90" exitCode=0
Feb 19 10:04:34 crc kubenswrapper[4965]: I0219 10:04:34.347482 4965 generic.go:334] "Generic (PLEG): container finished" podID="05248f7d-0b63-4529-a251-00944910acce" containerID="aff0e6e5d6e8038481e818e4366393d1195eb84f3213b1374079b13fd2f1be9e" exitCode=0
Feb 19 10:04:34 crc kubenswrapper[4965]: I0219 10:04:34.347181 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05248f7d-0b63-4529-a251-00944910acce","Type":"ContainerDied","Data":"66a9192a0caa9a9358de00ee5fb10361cfd139d68a18a207d756d98bc38c3f74"}
Feb 19 10:04:34 crc kubenswrapper[4965]: I0219 10:04:34.347524 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05248f7d-0b63-4529-a251-00944910acce","Type":"ContainerDied","Data":"7ebeada2bdc06b470d9b8f64f26cbb5c9976b8e062a0d17601206135c3c3c360"}
Feb 19 10:04:34 crc kubenswrapper[4965]: I0219 10:04:34.347545 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05248f7d-0b63-4529-a251-00944910acce","Type":"ContainerDied","Data":"d6d466979ab0d0de91c9d545c8444bd61b165e58c8d44785af167c286f5e4f90"}
Feb 19 10:04:34 crc kubenswrapper[4965]: I0219 10:04:34.347557 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05248f7d-0b63-4529-a251-00944910acce","Type":"ContainerDied","Data":"aff0e6e5d6e8038481e818e4366393d1195eb84f3213b1374079b13fd2f1be9e"}
Feb 19 10:04:37 crc kubenswrapper[4965]: I0219 10:04:37.150980 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 10:04:37 crc kubenswrapper[4965]: I0219 10:04:37.151452 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8a107a22-ae05-4559-aa4b-73a727fc2c29" containerName="glance-log" containerID="cri-o://9ca77900431a612cdfef278233ad7dcc12b792210dfff4bd5d9ad5548faa2706" gracePeriod=30
Feb 19 10:04:37 crc kubenswrapper[4965]: I0219 10:04:37.151577 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8a107a22-ae05-4559-aa4b-73a727fc2c29" containerName="glance-httpd" containerID="cri-o://d4b61b76e38165e1d4dada1b2501f40b791392e37e333c8f2a65659c6dbc7a61" gracePeriod=30
Feb 19 10:04:37 crc kubenswrapper[4965]: I0219 10:04:37.380606 4965 generic.go:334] "Generic (PLEG): container finished" podID="8a107a22-ae05-4559-aa4b-73a727fc2c29" containerID="9ca77900431a612cdfef278233ad7dcc12b792210dfff4bd5d9ad5548faa2706" exitCode=143
Feb 19 10:04:37 crc kubenswrapper[4965]: I0219 10:04:37.380682 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a107a22-ae05-4559-aa4b-73a727fc2c29","Type":"ContainerDied","Data":"9ca77900431a612cdfef278233ad7dcc12b792210dfff4bd5d9ad5548faa2706"}
Feb 19 10:04:38 crc kubenswrapper[4965]: I0219 10:04:38.640165 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 10:04:38 crc kubenswrapper[4965]: I0219 10:04:38.640837 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4" containerName="glance-log" containerID="cri-o://49fcaca484061b23abe5163e6400766b358539b4d4e48b2ae87e123e3faf885b" gracePeriod=30
Feb 19 10:04:38 crc kubenswrapper[4965]: I0219 10:04:38.641319 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4" containerName="glance-httpd" containerID="cri-o://875e5c0e930aae00aa948239350bc6c796d241e6326b1a4f9a9c5c8717df04da" gracePeriod=30
Feb 19 10:04:38 crc kubenswrapper[4965]: I0219 10:04:38.748008 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:04:38 crc kubenswrapper[4965]: I0219 10:04:38.885026 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-scripts\") pod \"05248f7d-0b63-4529-a251-00944910acce\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") "
Feb 19 10:04:38 crc kubenswrapper[4965]: I0219 10:04:38.885161 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05248f7d-0b63-4529-a251-00944910acce-log-httpd\") pod \"05248f7d-0b63-4529-a251-00944910acce\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") "
Feb 19 10:04:38 crc kubenswrapper[4965]: I0219 10:04:38.885234 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05248f7d-0b63-4529-a251-00944910acce-run-httpd\") pod \"05248f7d-0b63-4529-a251-00944910acce\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") "
Feb 19 10:04:38 crc kubenswrapper[4965]: I0219 10:04:38.885268 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-config-data\") pod \"05248f7d-0b63-4529-a251-00944910acce\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") "
Feb 19 10:04:38 crc kubenswrapper[4965]: I0219 10:04:38.885326 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-sg-core-conf-yaml\") pod \"05248f7d-0b63-4529-a251-00944910acce\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") "
Feb 19 10:04:38 crc kubenswrapper[4965]: I0219 10:04:38.885453 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnccw\" (UniqueName: \"kubernetes.io/projected/05248f7d-0b63-4529-a251-00944910acce-kube-api-access-hnccw\") pod \"05248f7d-0b63-4529-a251-00944910acce\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") "
Feb 19 10:04:38 crc kubenswrapper[4965]: I0219 10:04:38.885503 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-combined-ca-bundle\") pod \"05248f7d-0b63-4529-a251-00944910acce\" (UID: \"05248f7d-0b63-4529-a251-00944910acce\") "
Feb 19 10:04:38 crc kubenswrapper[4965]: I0219 10:04:38.885975 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05248f7d-0b63-4529-a251-00944910acce-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "05248f7d-0b63-4529-a251-00944910acce" (UID: "05248f7d-0b63-4529-a251-00944910acce"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:04:38 crc kubenswrapper[4965]: I0219 10:04:38.886109 4965 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05248f7d-0b63-4529-a251-00944910acce-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:38 crc kubenswrapper[4965]: I0219 10:04:38.893293 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05248f7d-0b63-4529-a251-00944910acce-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "05248f7d-0b63-4529-a251-00944910acce" (UID: "05248f7d-0b63-4529-a251-00944910acce"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:04:38 crc kubenswrapper[4965]: I0219 10:04:38.898300 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-scripts" (OuterVolumeSpecName: "scripts") pod "05248f7d-0b63-4529-a251-00944910acce" (UID: "05248f7d-0b63-4529-a251-00944910acce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:04:38 crc kubenswrapper[4965]: I0219 10:04:38.912061 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05248f7d-0b63-4529-a251-00944910acce-kube-api-access-hnccw" (OuterVolumeSpecName: "kube-api-access-hnccw") pod "05248f7d-0b63-4529-a251-00944910acce" (UID: "05248f7d-0b63-4529-a251-00944910acce"). InnerVolumeSpecName "kube-api-access-hnccw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:04:38 crc kubenswrapper[4965]: I0219 10:04:38.985970 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "05248f7d-0b63-4529-a251-00944910acce" (UID: "05248f7d-0b63-4529-a251-00944910acce"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:04:38 crc kubenswrapper[4965]: I0219 10:04:38.987446 4965 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:38 crc kubenswrapper[4965]: I0219 10:04:38.987543 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnccw\" (UniqueName: \"kubernetes.io/projected/05248f7d-0b63-4529-a251-00944910acce-kube-api-access-hnccw\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:38 crc kubenswrapper[4965]: I0219 10:04:38.987607 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:38 crc kubenswrapper[4965]: I0219 10:04:38.987660 4965 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05248f7d-0b63-4529-a251-00944910acce-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.085086 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-config-data" (OuterVolumeSpecName: "config-data") pod "05248f7d-0b63-4529-a251-00944910acce" (UID: "05248f7d-0b63-4529-a251-00944910acce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.089847 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.241325 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05248f7d-0b63-4529-a251-00944910acce" (UID: "05248f7d-0b63-4529-a251-00944910acce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.306130 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05248f7d-0b63-4529-a251-00944910acce-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.403637 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-pkhzc" event={"ID":"d5e000de-4745-47c0-b6e6-8735c626518e","Type":"ContainerStarted","Data":"4ca0cb278135cdd36718827ee36f015f4fe1729fe7693d3cd38cc8ff8e2ced90"}
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.406574 4965 generic.go:334] "Generic (PLEG): container finished" podID="5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4" containerID="49fcaca484061b23abe5163e6400766b358539b4d4e48b2ae87e123e3faf885b" exitCode=143
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.406660 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4","Type":"ContainerDied","Data":"49fcaca484061b23abe5163e6400766b358539b4d4e48b2ae87e123e3faf885b"}
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.410174 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05248f7d-0b63-4529-a251-00944910acce","Type":"ContainerDied","Data":"90443eca7fdb2273342f6cca36ee0d5591900a603693983d3170e62ed3021cd5"}
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.410223 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.410240 4965 scope.go:117] "RemoveContainer" containerID="66a9192a0caa9a9358de00ee5fb10361cfd139d68a18a207d756d98bc38c3f74"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.430037 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-pkhzc" podStartSLOduration=3.321776257 podStartE2EDuration="12.430019133s" podCreationTimestamp="2026-02-19 10:04:27 +0000 UTC" firstStartedPulling="2026-02-19 10:04:29.083378444 +0000 UTC m=+1324.704699744" lastFinishedPulling="2026-02-19 10:04:38.19162131 +0000 UTC m=+1333.812942620" observedRunningTime="2026-02-19 10:04:39.421375203 +0000 UTC m=+1335.042696513" watchObservedRunningTime="2026-02-19 10:04:39.430019133 +0000 UTC m=+1335.051340443"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.451946 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.467686 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.484888 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:04:39 crc kubenswrapper[4965]: E0219 10:04:39.485320 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05248f7d-0b63-4529-a251-00944910acce" containerName="proxy-httpd"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.485336 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="05248f7d-0b63-4529-a251-00944910acce" containerName="proxy-httpd"
Feb 19 10:04:39 crc kubenswrapper[4965]: E0219 10:04:39.485350 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bccdd96-d87f-4f40-979a-b650eabac24f" containerName="neutron-httpd"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.485358 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bccdd96-d87f-4f40-979a-b650eabac24f" containerName="neutron-httpd"
Feb 19 10:04:39 crc kubenswrapper[4965]: E0219 10:04:39.485378 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bccdd96-d87f-4f40-979a-b650eabac24f" containerName="neutron-api"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.485385 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bccdd96-d87f-4f40-979a-b650eabac24f" containerName="neutron-api"
Feb 19 10:04:39 crc kubenswrapper[4965]: E0219 10:04:39.485402 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05248f7d-0b63-4529-a251-00944910acce" containerName="ceilometer-notification-agent"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.485407 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="05248f7d-0b63-4529-a251-00944910acce" containerName="ceilometer-notification-agent"
Feb 19 10:04:39 crc kubenswrapper[4965]: E0219 10:04:39.485418 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05248f7d-0b63-4529-a251-00944910acce" containerName="ceilometer-central-agent"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.485424 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="05248f7d-0b63-4529-a251-00944910acce" containerName="ceilometer-central-agent"
Feb 19 10:04:39 crc kubenswrapper[4965]: E0219 10:04:39.485441 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05248f7d-0b63-4529-a251-00944910acce" containerName="sg-core"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.485447 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="05248f7d-0b63-4529-a251-00944910acce" containerName="sg-core"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.485609 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="05248f7d-0b63-4529-a251-00944910acce" containerName="sg-core"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.485619 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="05248f7d-0b63-4529-a251-00944910acce" containerName="proxy-httpd"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.485628 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="05248f7d-0b63-4529-a251-00944910acce" containerName="ceilometer-notification-agent"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.485639 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="05248f7d-0b63-4529-a251-00944910acce" containerName="ceilometer-central-agent"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.485648 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bccdd96-d87f-4f40-979a-b650eabac24f" containerName="neutron-httpd"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.485664 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bccdd96-d87f-4f40-979a-b650eabac24f" containerName="neutron-api"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.487694 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.493089 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.493301 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.503505 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.611240 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8fpd\" (UniqueName: \"kubernetes.io/projected/62dadf17-d312-4ce0-b6f5-9319261705e2-kube-api-access-h8fpd\") pod \"ceilometer-0\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " pod="openstack/ceilometer-0"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.611531 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62dadf17-d312-4ce0-b6f5-9319261705e2-run-httpd\") pod \"ceilometer-0\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " pod="openstack/ceilometer-0"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.611570 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " pod="openstack/ceilometer-0"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.612063 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-scripts\") pod \"ceilometer-0\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " pod="openstack/ceilometer-0"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.612127 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " pod="openstack/ceilometer-0"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.612308 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62dadf17-d312-4ce0-b6f5-9319261705e2-log-httpd\") pod \"ceilometer-0\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " pod="openstack/ceilometer-0"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.612563 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-config-data\") pod \"ceilometer-0\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " pod="openstack/ceilometer-0"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.714182 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62dadf17-d312-4ce0-b6f5-9319261705e2-log-httpd\") pod \"ceilometer-0\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " pod="openstack/ceilometer-0"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.714295 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-config-data\") pod \"ceilometer-0\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " pod="openstack/ceilometer-0"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.714338 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8fpd\" (UniqueName: \"kubernetes.io/projected/62dadf17-d312-4ce0-b6f5-9319261705e2-kube-api-access-h8fpd\") pod \"ceilometer-0\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " pod="openstack/ceilometer-0"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.714374 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62dadf17-d312-4ce0-b6f5-9319261705e2-run-httpd\") pod \"ceilometer-0\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " pod="openstack/ceilometer-0"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.714414 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " pod="openstack/ceilometer-0"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.714491 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-scripts\") pod \"ceilometer-0\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " pod="openstack/ceilometer-0"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.714516 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " pod="openstack/ceilometer-0"
Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.714709 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62dadf17-d312-4ce0-b6f5-9319261705e2-log-httpd\") pod \"ceilometer-0\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " pod="openstack/ceilometer-0"
Feb 19 10:04:39 crc 
kubenswrapper[4965]: I0219 10:04:39.714916 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62dadf17-d312-4ce0-b6f5-9319261705e2-run-httpd\") pod \"ceilometer-0\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " pod="openstack/ceilometer-0" Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.723790 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " pod="openstack/ceilometer-0" Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.724373 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-config-data\") pod \"ceilometer-0\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " pod="openstack/ceilometer-0" Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.727863 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " pod="openstack/ceilometer-0" Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.727910 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-scripts\") pod \"ceilometer-0\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " pod="openstack/ceilometer-0" Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.747014 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8fpd\" (UniqueName: \"kubernetes.io/projected/62dadf17-d312-4ce0-b6f5-9319261705e2-kube-api-access-h8fpd\") pod \"ceilometer-0\" (UID: 
\"62dadf17-d312-4ce0-b6f5-9319261705e2\") " pod="openstack/ceilometer-0" Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.804013 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:04:39 crc kubenswrapper[4965]: I0219 10:04:39.842832 4965 scope.go:117] "RemoveContainer" containerID="7ebeada2bdc06b470d9b8f64f26cbb5c9976b8e062a0d17601206135c3c3c360" Feb 19 10:04:40 crc kubenswrapper[4965]: I0219 10:04:40.021444 4965 scope.go:117] "RemoveContainer" containerID="d6d466979ab0d0de91c9d545c8444bd61b165e58c8d44785af167c286f5e4f90" Feb 19 10:04:40 crc kubenswrapper[4965]: I0219 10:04:40.137027 4965 scope.go:117] "RemoveContainer" containerID="aff0e6e5d6e8038481e818e4366393d1195eb84f3213b1374079b13fd2f1be9e" Feb 19 10:04:40 crc kubenswrapper[4965]: I0219 10:04:40.376962 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:40 crc kubenswrapper[4965]: I0219 10:04:40.431252 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62dadf17-d312-4ce0-b6f5-9319261705e2","Type":"ContainerStarted","Data":"59353da03a9c2c5daddecaf9108b8927fb5ad20ab43d62e31cfb85f0835ab4f8"} Feb 19 10:04:41 crc kubenswrapper[4965]: I0219 10:04:41.061377 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Feb 19 10:04:41 crc kubenswrapper[4965]: I0219 10:04:41.213744 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05248f7d-0b63-4529-a251-00944910acce" path="/var/lib/kubelet/pods/05248f7d-0b63-4529-a251-00944910acce/volumes" Feb 19 10:04:41 crc kubenswrapper[4965]: I0219 10:04:41.444009 4965 generic.go:334] "Generic (PLEG): container finished" podID="8a107a22-ae05-4559-aa4b-73a727fc2c29" containerID="d4b61b76e38165e1d4dada1b2501f40b791392e37e333c8f2a65659c6dbc7a61" exitCode=0 Feb 19 10:04:41 crc kubenswrapper[4965]: I0219 10:04:41.444087 4965 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a107a22-ae05-4559-aa4b-73a727fc2c29","Type":"ContainerDied","Data":"d4b61b76e38165e1d4dada1b2501f40b791392e37e333c8f2a65659c6dbc7a61"} Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.149242 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.283907 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-config-data\") pod \"8a107a22-ae05-4559-aa4b-73a727fc2c29\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.283978 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a107a22-ae05-4559-aa4b-73a727fc2c29-logs\") pod \"8a107a22-ae05-4559-aa4b-73a727fc2c29\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.284308 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\") pod \"8a107a22-ae05-4559-aa4b-73a727fc2c29\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.284359 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-combined-ca-bundle\") pod \"8a107a22-ae05-4559-aa4b-73a727fc2c29\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.284409 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjhz8\" (UniqueName: 
\"kubernetes.io/projected/8a107a22-ae05-4559-aa4b-73a727fc2c29-kube-api-access-jjhz8\") pod \"8a107a22-ae05-4559-aa4b-73a727fc2c29\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.284469 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-scripts\") pod \"8a107a22-ae05-4559-aa4b-73a727fc2c29\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.284541 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a107a22-ae05-4559-aa4b-73a727fc2c29-httpd-run\") pod \"8a107a22-ae05-4559-aa4b-73a727fc2c29\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.284754 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-public-tls-certs\") pod \"8a107a22-ae05-4559-aa4b-73a727fc2c29\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.285071 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a107a22-ae05-4559-aa4b-73a727fc2c29-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8a107a22-ae05-4559-aa4b-73a727fc2c29" (UID: "8a107a22-ae05-4559-aa4b-73a727fc2c29"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.285647 4965 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a107a22-ae05-4559-aa4b-73a727fc2c29-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.286573 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a107a22-ae05-4559-aa4b-73a727fc2c29-logs" (OuterVolumeSpecName: "logs") pod "8a107a22-ae05-4559-aa4b-73a727fc2c29" (UID: "8a107a22-ae05-4559-aa4b-73a727fc2c29"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.300705 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-scripts" (OuterVolumeSpecName: "scripts") pod "8a107a22-ae05-4559-aa4b-73a727fc2c29" (UID: "8a107a22-ae05-4559-aa4b-73a727fc2c29"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.302919 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a107a22-ae05-4559-aa4b-73a727fc2c29-kube-api-access-jjhz8" (OuterVolumeSpecName: "kube-api-access-jjhz8") pod "8a107a22-ae05-4559-aa4b-73a727fc2c29" (UID: "8a107a22-ae05-4559-aa4b-73a727fc2c29"). InnerVolumeSpecName "kube-api-access-jjhz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.365918 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8954926-b989-4d4b-b68d-eda06a80d48a" (OuterVolumeSpecName: "glance") pod "8a107a22-ae05-4559-aa4b-73a727fc2c29" (UID: "8a107a22-ae05-4559-aa4b-73a727fc2c29"). 
InnerVolumeSpecName "pvc-b8954926-b989-4d4b-b68d-eda06a80d48a". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.387624 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a107a22-ae05-4559-aa4b-73a727fc2c29" (UID: "8a107a22-ae05-4559-aa4b-73a727fc2c29"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.387857 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-combined-ca-bundle\") pod \"8a107a22-ae05-4559-aa4b-73a727fc2c29\" (UID: \"8a107a22-ae05-4559-aa4b-73a727fc2c29\") " Feb 19 10:04:42 crc kubenswrapper[4965]: W0219 10:04:42.388098 4965 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8a107a22-ae05-4559-aa4b-73a727fc2c29/volumes/kubernetes.io~secret/combined-ca-bundle Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.388117 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a107a22-ae05-4559-aa4b-73a727fc2c29" (UID: "8a107a22-ae05-4559-aa4b-73a727fc2c29"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.399568 4965 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\") on node \"crc\" " Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.399634 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.399655 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjhz8\" (UniqueName: \"kubernetes.io/projected/8a107a22-ae05-4559-aa4b-73a727fc2c29-kube-api-access-jjhz8\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.399673 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.399684 4965 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a107a22-ae05-4559-aa4b-73a727fc2c29-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.452051 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-config-data" (OuterVolumeSpecName: "config-data") pod "8a107a22-ae05-4559-aa4b-73a727fc2c29" (UID: "8a107a22-ae05-4559-aa4b-73a727fc2c29"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.477093 4965 generic.go:334] "Generic (PLEG): container finished" podID="5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4" containerID="875e5c0e930aae00aa948239350bc6c796d241e6326b1a4f9a9c5c8717df04da" exitCode=0 Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.477565 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4","Type":"ContainerDied","Data":"875e5c0e930aae00aa948239350bc6c796d241e6326b1a4f9a9c5c8717df04da"} Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.479155 4965 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.479388 4965 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b8954926-b989-4d4b-b68d-eda06a80d48a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8954926-b989-4d4b-b68d-eda06a80d48a") on node "crc" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.492627 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62dadf17-d312-4ce0-b6f5-9319261705e2","Type":"ContainerStarted","Data":"cd2223af625aaf5c14380d7e00959262f24260882cb9053e7ef6e0f0c8857b92"} Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.502883 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.503288 4965 reconciler_common.go:293] "Volume detached for volume \"pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:42 
crc kubenswrapper[4965]: I0219 10:04:42.503450 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8a107a22-ae05-4559-aa4b-73a727fc2c29" (UID: "8a107a22-ae05-4559-aa4b-73a727fc2c29"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.560557 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a107a22-ae05-4559-aa4b-73a727fc2c29","Type":"ContainerDied","Data":"0b6183937f51b8faa2ef4499a4ab2c59e567a700f7a270bee37294ec72975af4"} Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.560805 4965 scope.go:117] "RemoveContainer" containerID="d4b61b76e38165e1d4dada1b2501f40b791392e37e333c8f2a65659c6dbc7a61" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.561161 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.567801 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.609939 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95d9w\" (UniqueName: \"kubernetes.io/projected/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-kube-api-access-95d9w\") pod \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.610085 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") pod \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.610130 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-logs\") pod \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.610149 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-httpd-run\") pod \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.610255 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-config-data\") pod \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.610296 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-scripts\") pod \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.610330 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-internal-tls-certs\") pod \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.610427 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-combined-ca-bundle\") pod \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\" (UID: \"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4\") " Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.611085 4965 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a107a22-ae05-4559-aa4b-73a727fc2c29-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.612844 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4" (UID: "5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.613217 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-logs" (OuterVolumeSpecName: "logs") pod "5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4" (UID: "5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.624534 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-kube-api-access-95d9w" (OuterVolumeSpecName: "kube-api-access-95d9w") pod "5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4" (UID: "5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4"). InnerVolumeSpecName "kube-api-access-95d9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.641656 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-scripts" (OuterVolumeSpecName: "scripts") pod "5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4" (UID: "5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.650268 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.656767 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.667884 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:04:42 crc kubenswrapper[4965]: E0219 10:04:42.668313 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4" containerName="glance-log" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.668328 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4" containerName="glance-log" Feb 19 10:04:42 crc kubenswrapper[4965]: E0219 10:04:42.668355 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4" 
containerName="glance-httpd" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.668361 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4" containerName="glance-httpd" Feb 19 10:04:42 crc kubenswrapper[4965]: E0219 10:04:42.668373 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a107a22-ae05-4559-aa4b-73a727fc2c29" containerName="glance-log" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.668379 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a107a22-ae05-4559-aa4b-73a727fc2c29" containerName="glance-log" Feb 19 10:04:42 crc kubenswrapper[4965]: E0219 10:04:42.668393 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a107a22-ae05-4559-aa4b-73a727fc2c29" containerName="glance-httpd" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.668399 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a107a22-ae05-4559-aa4b-73a727fc2c29" containerName="glance-httpd" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.668573 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a107a22-ae05-4559-aa4b-73a727fc2c29" containerName="glance-log" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.668593 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a107a22-ae05-4559-aa4b-73a727fc2c29" containerName="glance-httpd" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.668604 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4" containerName="glance-httpd" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.668617 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4" containerName="glance-log" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.669638 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.673714 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.676039 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.678858 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.688238 4965 scope.go:117] "RemoveContainer" containerID="9ca77900431a612cdfef278233ad7dcc12b792210dfff4bd5d9ad5548faa2706" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.714579 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1ece847-d2dd-42e7-ad4c-5f9ad04529f8-scripts\") pod \"glance-default-external-api-0\" (UID: \"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.714619 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1ece847-d2dd-42e7-ad4c-5f9ad04529f8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.714650 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1ece847-d2dd-42e7-ad4c-5f9ad04529f8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc 
kubenswrapper[4965]: I0219 10:04:42.714696 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\") pod \"glance-default-external-api-0\" (UID: \"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.714766 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ece847-d2dd-42e7-ad4c-5f9ad04529f8-config-data\") pod \"glance-default-external-api-0\" (UID: \"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.714781 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1ece847-d2dd-42e7-ad4c-5f9ad04529f8-logs\") pod \"glance-default-external-api-0\" (UID: \"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.714821 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vxq4\" (UniqueName: \"kubernetes.io/projected/d1ece847-d2dd-42e7-ad4c-5f9ad04529f8-kube-api-access-9vxq4\") pod \"glance-default-external-api-0\" (UID: \"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.714869 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ece847-d2dd-42e7-ad4c-5f9ad04529f8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.714961 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95d9w\" (UniqueName: \"kubernetes.io/projected/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-kube-api-access-95d9w\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.714973 4965 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.714982 4965 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.714990 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.760562 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616" (OuterVolumeSpecName: "glance") pod "5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4" (UID: "5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4"). InnerVolumeSpecName "pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.767495 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4" (UID: "5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.811411 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4" (UID: "5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.816102 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ece847-d2dd-42e7-ad4c-5f9ad04529f8-config-data\") pod \"glance-default-external-api-0\" (UID: \"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.816139 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1ece847-d2dd-42e7-ad4c-5f9ad04529f8-logs\") pod \"glance-default-external-api-0\" (UID: \"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.816179 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vxq4\" (UniqueName: \"kubernetes.io/projected/d1ece847-d2dd-42e7-ad4c-5f9ad04529f8-kube-api-access-9vxq4\") pod \"glance-default-external-api-0\" (UID: \"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.816236 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ece847-d2dd-42e7-ad4c-5f9ad04529f8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.816295 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1ece847-d2dd-42e7-ad4c-5f9ad04529f8-scripts\") pod \"glance-default-external-api-0\" (UID: \"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.816315 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1ece847-d2dd-42e7-ad4c-5f9ad04529f8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.816336 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1ece847-d2dd-42e7-ad4c-5f9ad04529f8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.816364 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\") pod \"glance-default-external-api-0\" (UID: \"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.816448 4965 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") on node \"crc\" " Feb 19 
10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.816462 4965 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.816473 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.822781 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1ece847-d2dd-42e7-ad4c-5f9ad04529f8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.823329 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1ece847-d2dd-42e7-ad4c-5f9ad04529f8-logs\") pod \"glance-default-external-api-0\" (UID: \"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.826446 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ece847-d2dd-42e7-ad4c-5f9ad04529f8-config-data\") pod \"glance-default-external-api-0\" (UID: \"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.830011 4965 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.830276 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\") pod \"glance-default-external-api-0\" (UID: \"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6e7bd73c7e8cf1522bc205031417ace7701fabab6d8bd5d89d84d48b59498ea6/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.834263 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1ece847-d2dd-42e7-ad4c-5f9ad04529f8-scripts\") pod \"glance-default-external-api-0\" (UID: \"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.845272 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1ece847-d2dd-42e7-ad4c-5f9ad04529f8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.845832 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ece847-d2dd-42e7-ad4c-5f9ad04529f8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.860515 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-config-data" (OuterVolumeSpecName: "config-data") pod 
"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4" (UID: "5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.875726 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vxq4\" (UniqueName: \"kubernetes.io/projected/d1ece847-d2dd-42e7-ad4c-5f9ad04529f8-kube-api-access-9vxq4\") pod \"glance-default-external-api-0\" (UID: \"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:42 crc kubenswrapper[4965]: I0219 10:04:42.919706 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.095968 4965 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.096221 4965 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616") on node "crc" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.127140 4965 reconciler_common.go:293] "Volume detached for volume \"pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.256654 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a107a22-ae05-4559-aa4b-73a727fc2c29" path="/var/lib/kubelet/pods/8a107a22-ae05-4559-aa4b-73a727fc2c29/volumes" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.272276 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8954926-b989-4d4b-b68d-eda06a80d48a\") pod \"glance-default-external-api-0\" (UID: \"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8\") " pod="openstack/glance-default-external-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.562658 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.574970 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62dadf17-d312-4ce0-b6f5-9319261705e2","Type":"ContainerStarted","Data":"4f5cf7e8baf7d6cab5ae59ce06c0ee44b12e6416ae5f5dbe5dc3af81f258cfe7"} Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.575323 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62dadf17-d312-4ce0-b6f5-9319261705e2","Type":"ContainerStarted","Data":"1d01fec3c6e56a2f55da4dc0ed1e7913354afb8b3557ad30632dfb3e647f2a49"} Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.578487 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4","Type":"ContainerDied","Data":"c6b7bdfc41c1ed122f6a4051463c7fa61d7e6bb34c838c8b7f40d4625f4665cb"} Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.578535 4965 scope.go:117] "RemoveContainer" containerID="875e5c0e930aae00aa948239350bc6c796d241e6326b1a4f9a9c5c8717df04da" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.578696 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.604230 4965 scope.go:117] "RemoveContainer" containerID="49fcaca484061b23abe5163e6400766b358539b4d4e48b2ae87e123e3faf885b" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.609783 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.623148 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.662870 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.664843 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.669467 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.669678 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.679637 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.845344 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7e632f4-f05e-4ac6-a1cd-96ae3244c450-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.845654 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7e632f4-f05e-4ac6-a1cd-96ae3244c450-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.845702 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.845744 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x25vp\" (UniqueName: \"kubernetes.io/projected/b7e632f4-f05e-4ac6-a1cd-96ae3244c450-kube-api-access-x25vp\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.845772 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7e632f4-f05e-4ac6-a1cd-96ae3244c450-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.845817 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b7e632f4-f05e-4ac6-a1cd-96ae3244c450-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc 
kubenswrapper[4965]: I0219 10:04:43.845840 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7e632f4-f05e-4ac6-a1cd-96ae3244c450-logs\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.845886 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e632f4-f05e-4ac6-a1cd-96ae3244c450-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.947106 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.948029 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x25vp\" (UniqueName: \"kubernetes.io/projected/b7e632f4-f05e-4ac6-a1cd-96ae3244c450-kube-api-access-x25vp\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.948070 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7e632f4-f05e-4ac6-a1cd-96ae3244c450-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.948135 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b7e632f4-f05e-4ac6-a1cd-96ae3244c450-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.948163 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7e632f4-f05e-4ac6-a1cd-96ae3244c450-logs\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.948239 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e632f4-f05e-4ac6-a1cd-96ae3244c450-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.948322 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7e632f4-f05e-4ac6-a1cd-96ae3244c450-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.948356 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7e632f4-f05e-4ac6-a1cd-96ae3244c450-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc 
kubenswrapper[4965]: I0219 10:04:43.949350 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b7e632f4-f05e-4ac6-a1cd-96ae3244c450-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.951629 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7e632f4-f05e-4ac6-a1cd-96ae3244c450-logs\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.952063 4965 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.952109 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bbd5634dc66e040ac4fcb8a10b0a021d0db9968a1cda30e816c0dbc4187cf813/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.959964 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e632f4-f05e-4ac6-a1cd-96ae3244c450-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.960061 4965 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7e632f4-f05e-4ac6-a1cd-96ae3244c450-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.960373 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7e632f4-f05e-4ac6-a1cd-96ae3244c450-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.968994 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x25vp\" (UniqueName: \"kubernetes.io/projected/b7e632f4-f05e-4ac6-a1cd-96ae3244c450-kube-api-access-x25vp\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:43 crc kubenswrapper[4965]: I0219 10:04:43.981219 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7e632f4-f05e-4ac6-a1cd-96ae3244c450-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:44 crc kubenswrapper[4965]: I0219 10:04:44.002139 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd8fd35d-2a75-4403-b478-22ed0ff75616\") pod \"glance-default-internal-api-0\" (UID: \"b7e632f4-f05e-4ac6-a1cd-96ae3244c450\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:04:44 crc kubenswrapper[4965]: I0219 10:04:44.101686 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:44 crc kubenswrapper[4965]: I0219 10:04:44.370485 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:04:44 crc kubenswrapper[4965]: I0219 10:04:44.607452 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8","Type":"ContainerStarted","Data":"3eaa2445a5499969ac06d2b496b94027f4b8becc68f48fff6ae9cfe1772d2d35"} Feb 19 10:04:44 crc kubenswrapper[4965]: I0219 10:04:44.747598 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:04:45 crc kubenswrapper[4965]: I0219 10:04:45.217430 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4" path="/var/lib/kubelet/pods/5d50c3b9-1e4d-4ad0-9977-b0a3a45ce9b4/volumes" Feb 19 10:04:45 crc kubenswrapper[4965]: I0219 10:04:45.661266 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62dadf17-d312-4ce0-b6f5-9319261705e2","Type":"ContainerStarted","Data":"f85798d7f6105aae216db015b6205b18df3d53f8707ab9a1dfab0d4aec4f5ecb"} Feb 19 10:04:45 crc kubenswrapper[4965]: I0219 10:04:45.661576 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 10:04:45 crc kubenswrapper[4965]: I0219 10:04:45.682949 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b7e632f4-f05e-4ac6-a1cd-96ae3244c450","Type":"ContainerStarted","Data":"7fdd94ab39dd170b14bcec4ce8c1a4c6d54f355a16bb3250c89ca119007a8e05"} Feb 19 10:04:45 crc kubenswrapper[4965]: I0219 10:04:45.682993 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"b7e632f4-f05e-4ac6-a1cd-96ae3244c450","Type":"ContainerStarted","Data":"f3cb4b2a9dff15866305ff9337460eab516dbd0b6e636a6e19386a692c3fb836"} Feb 19 10:04:45 crc kubenswrapper[4965]: I0219 10:04:45.691766 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8","Type":"ContainerStarted","Data":"94e8ed160a56eb711f1142d37943e8adc371d55abd5c60aa18ff6528884517a6"} Feb 19 10:04:45 crc kubenswrapper[4965]: I0219 10:04:45.695329 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.016175634 podStartE2EDuration="6.695315139s" podCreationTimestamp="2026-02-19 10:04:39 +0000 UTC" firstStartedPulling="2026-02-19 10:04:40.398382217 +0000 UTC m=+1336.019703527" lastFinishedPulling="2026-02-19 10:04:45.077521732 +0000 UTC m=+1340.698843032" observedRunningTime="2026-02-19 10:04:45.683538793 +0000 UTC m=+1341.304860103" watchObservedRunningTime="2026-02-19 10:04:45.695315139 +0000 UTC m=+1341.316636449" Feb 19 10:04:45 crc kubenswrapper[4965]: I0219 10:04:45.986361 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:46 crc kubenswrapper[4965]: I0219 10:04:46.601216 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:04:46 crc kubenswrapper[4965]: I0219 10:04:46.601604 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:04:46 crc 
kubenswrapper[4965]: I0219 10:04:46.704729 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b7e632f4-f05e-4ac6-a1cd-96ae3244c450","Type":"ContainerStarted","Data":"9cd410dc80e7ac2fe734b552785f0326a135122514456f5b1180243984972713"} Feb 19 10:04:46 crc kubenswrapper[4965]: I0219 10:04:46.709048 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d1ece847-d2dd-42e7-ad4c-5f9ad04529f8","Type":"ContainerStarted","Data":"cdc6fff32ebb64fc646ef358ed7b8b494e9ce448afecc7d2e64a8eb1479f9d5c"} Feb 19 10:04:46 crc kubenswrapper[4965]: I0219 10:04:46.733541 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.7335013889999997 podStartE2EDuration="3.733501389s" podCreationTimestamp="2026-02-19 10:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:04:46.723047514 +0000 UTC m=+1342.344368864" watchObservedRunningTime="2026-02-19 10:04:46.733501389 +0000 UTC m=+1342.354822699" Feb 19 10:04:47 crc kubenswrapper[4965]: I0219 10:04:47.716566 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62dadf17-d312-4ce0-b6f5-9319261705e2" containerName="ceilometer-central-agent" containerID="cri-o://cd2223af625aaf5c14380d7e00959262f24260882cb9053e7ef6e0f0c8857b92" gracePeriod=30 Feb 19 10:04:47 crc kubenswrapper[4965]: I0219 10:04:47.716639 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62dadf17-d312-4ce0-b6f5-9319261705e2" containerName="proxy-httpd" containerID="cri-o://f85798d7f6105aae216db015b6205b18df3d53f8707ab9a1dfab0d4aec4f5ecb" gracePeriod=30 Feb 19 10:04:47 crc kubenswrapper[4965]: I0219 10:04:47.716698 4965 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="62dadf17-d312-4ce0-b6f5-9319261705e2" containerName="sg-core" containerID="cri-o://4f5cf7e8baf7d6cab5ae59ce06c0ee44b12e6416ae5f5dbe5dc3af81f258cfe7" gracePeriod=30 Feb 19 10:04:47 crc kubenswrapper[4965]: I0219 10:04:47.716695 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62dadf17-d312-4ce0-b6f5-9319261705e2" containerName="ceilometer-notification-agent" containerID="cri-o://1d01fec3c6e56a2f55da4dc0ed1e7913354afb8b3557ad30632dfb3e647f2a49" gracePeriod=30 Feb 19 10:04:48 crc kubenswrapper[4965]: I0219 10:04:48.728783 4965 generic.go:334] "Generic (PLEG): container finished" podID="62dadf17-d312-4ce0-b6f5-9319261705e2" containerID="f85798d7f6105aae216db015b6205b18df3d53f8707ab9a1dfab0d4aec4f5ecb" exitCode=0 Feb 19 10:04:48 crc kubenswrapper[4965]: I0219 10:04:48.729084 4965 generic.go:334] "Generic (PLEG): container finished" podID="62dadf17-d312-4ce0-b6f5-9319261705e2" containerID="4f5cf7e8baf7d6cab5ae59ce06c0ee44b12e6416ae5f5dbe5dc3af81f258cfe7" exitCode=2 Feb 19 10:04:48 crc kubenswrapper[4965]: I0219 10:04:48.729094 4965 generic.go:334] "Generic (PLEG): container finished" podID="62dadf17-d312-4ce0-b6f5-9319261705e2" containerID="1d01fec3c6e56a2f55da4dc0ed1e7913354afb8b3557ad30632dfb3e647f2a49" exitCode=0 Feb 19 10:04:48 crc kubenswrapper[4965]: I0219 10:04:48.728822 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62dadf17-d312-4ce0-b6f5-9319261705e2","Type":"ContainerDied","Data":"f85798d7f6105aae216db015b6205b18df3d53f8707ab9a1dfab0d4aec4f5ecb"} Feb 19 10:04:48 crc kubenswrapper[4965]: I0219 10:04:48.729177 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62dadf17-d312-4ce0-b6f5-9319261705e2","Type":"ContainerDied","Data":"4f5cf7e8baf7d6cab5ae59ce06c0ee44b12e6416ae5f5dbe5dc3af81f258cfe7"} Feb 19 10:04:48 crc kubenswrapper[4965]: I0219 
10:04:48.729228 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62dadf17-d312-4ce0-b6f5-9319261705e2","Type":"ContainerDied","Data":"1d01fec3c6e56a2f55da4dc0ed1e7913354afb8b3557ad30632dfb3e647f2a49"} Feb 19 10:04:52 crc kubenswrapper[4965]: I0219 10:04:52.766261 4965 generic.go:334] "Generic (PLEG): container finished" podID="62dadf17-d312-4ce0-b6f5-9319261705e2" containerID="cd2223af625aaf5c14380d7e00959262f24260882cb9053e7ef6e0f0c8857b92" exitCode=0 Feb 19 10:04:52 crc kubenswrapper[4965]: I0219 10:04:52.766325 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62dadf17-d312-4ce0-b6f5-9319261705e2","Type":"ContainerDied","Data":"cd2223af625aaf5c14380d7e00959262f24260882cb9053e7ef6e0f0c8857b92"} Feb 19 10:04:52 crc kubenswrapper[4965]: I0219 10:04:52.766856 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62dadf17-d312-4ce0-b6f5-9319261705e2","Type":"ContainerDied","Data":"59353da03a9c2c5daddecaf9108b8927fb5ad20ab43d62e31cfb85f0835ab4f8"} Feb 19 10:04:52 crc kubenswrapper[4965]: I0219 10:04:52.766873 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59353da03a9c2c5daddecaf9108b8927fb5ad20ab43d62e31cfb85f0835ab4f8" Feb 19 10:04:52 crc kubenswrapper[4965]: I0219 10:04:52.801181 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:04:52 crc kubenswrapper[4965]: I0219 10:04:52.825582 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=10.825558996 podStartE2EDuration="10.825558996s" podCreationTimestamp="2026-02-19 10:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:04:46.747665313 +0000 UTC m=+1342.368986623" watchObservedRunningTime="2026-02-19 10:04:52.825558996 +0000 UTC m=+1348.446880306" Feb 19 10:04:52 crc kubenswrapper[4965]: I0219 10:04:52.961404 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62dadf17-d312-4ce0-b6f5-9319261705e2-run-httpd\") pod \"62dadf17-d312-4ce0-b6f5-9319261705e2\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " Feb 19 10:04:52 crc kubenswrapper[4965]: I0219 10:04:52.962081 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-scripts\") pod \"62dadf17-d312-4ce0-b6f5-9319261705e2\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " Feb 19 10:04:52 crc kubenswrapper[4965]: I0219 10:04:52.962966 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8fpd\" (UniqueName: \"kubernetes.io/projected/62dadf17-d312-4ce0-b6f5-9319261705e2-kube-api-access-h8fpd\") pod \"62dadf17-d312-4ce0-b6f5-9319261705e2\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " Feb 19 10:04:52 crc kubenswrapper[4965]: I0219 10:04:52.963343 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62dadf17-d312-4ce0-b6f5-9319261705e2-log-httpd\") pod \"62dadf17-d312-4ce0-b6f5-9319261705e2\" (UID: 
\"62dadf17-d312-4ce0-b6f5-9319261705e2\") " Feb 19 10:04:52 crc kubenswrapper[4965]: I0219 10:04:52.963459 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-config-data\") pod \"62dadf17-d312-4ce0-b6f5-9319261705e2\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " Feb 19 10:04:52 crc kubenswrapper[4965]: I0219 10:04:52.962022 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62dadf17-d312-4ce0-b6f5-9319261705e2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "62dadf17-d312-4ce0-b6f5-9319261705e2" (UID: "62dadf17-d312-4ce0-b6f5-9319261705e2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:04:52 crc kubenswrapper[4965]: I0219 10:04:52.963757 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62dadf17-d312-4ce0-b6f5-9319261705e2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "62dadf17-d312-4ce0-b6f5-9319261705e2" (UID: "62dadf17-d312-4ce0-b6f5-9319261705e2"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:04:52 crc kubenswrapper[4965]: I0219 10:04:52.963907 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-sg-core-conf-yaml\") pod \"62dadf17-d312-4ce0-b6f5-9319261705e2\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " Feb 19 10:04:52 crc kubenswrapper[4965]: I0219 10:04:52.964098 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-combined-ca-bundle\") pod \"62dadf17-d312-4ce0-b6f5-9319261705e2\" (UID: \"62dadf17-d312-4ce0-b6f5-9319261705e2\") " Feb 19 10:04:52 crc kubenswrapper[4965]: I0219 10:04:52.964796 4965 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62dadf17-d312-4ce0-b6f5-9319261705e2-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:52 crc kubenswrapper[4965]: I0219 10:04:52.964869 4965 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62dadf17-d312-4ce0-b6f5-9319261705e2-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:52 crc kubenswrapper[4965]: I0219 10:04:52.968618 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-scripts" (OuterVolumeSpecName: "scripts") pod "62dadf17-d312-4ce0-b6f5-9319261705e2" (UID: "62dadf17-d312-4ce0-b6f5-9319261705e2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:52 crc kubenswrapper[4965]: I0219 10:04:52.970494 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62dadf17-d312-4ce0-b6f5-9319261705e2-kube-api-access-h8fpd" (OuterVolumeSpecName: "kube-api-access-h8fpd") pod "62dadf17-d312-4ce0-b6f5-9319261705e2" (UID: "62dadf17-d312-4ce0-b6f5-9319261705e2"). InnerVolumeSpecName "kube-api-access-h8fpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:52.998152 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "62dadf17-d312-4ce0-b6f5-9319261705e2" (UID: "62dadf17-d312-4ce0-b6f5-9319261705e2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.057140 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62dadf17-d312-4ce0-b6f5-9319261705e2" (UID: "62dadf17-d312-4ce0-b6f5-9319261705e2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.066715 4965 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.066753 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.066766 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.066779 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8fpd\" (UniqueName: \"kubernetes.io/projected/62dadf17-d312-4ce0-b6f5-9319261705e2-kube-api-access-h8fpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.094099 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-config-data" (OuterVolumeSpecName: "config-data") pod "62dadf17-d312-4ce0-b6f5-9319261705e2" (UID: "62dadf17-d312-4ce0-b6f5-9319261705e2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.168453 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62dadf17-d312-4ce0-b6f5-9319261705e2-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.563172 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.563503 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.605098 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.611140 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.776861 4965 generic.go:334] "Generic (PLEG): container finished" podID="d5e000de-4745-47c0-b6e6-8735c626518e" containerID="4ca0cb278135cdd36718827ee36f015f4fe1729fe7693d3cd38cc8ff8e2ced90" exitCode=0 Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.776976 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-pkhzc" event={"ID":"d5e000de-4745-47c0-b6e6-8735c626518e","Type":"ContainerDied","Data":"4ca0cb278135cdd36718827ee36f015f4fe1729fe7693d3cd38cc8ff8e2ced90"} Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.777063 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.778134 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.778167 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.823964 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.836126 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.852467 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:53 crc kubenswrapper[4965]: E0219 10:04:53.852959 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62dadf17-d312-4ce0-b6f5-9319261705e2" containerName="ceilometer-central-agent" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.852986 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="62dadf17-d312-4ce0-b6f5-9319261705e2" containerName="ceilometer-central-agent" Feb 19 10:04:53 crc kubenswrapper[4965]: E0219 10:04:53.853009 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62dadf17-d312-4ce0-b6f5-9319261705e2" containerName="ceilometer-notification-agent" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.853015 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="62dadf17-d312-4ce0-b6f5-9319261705e2" containerName="ceilometer-notification-agent" Feb 19 10:04:53 crc kubenswrapper[4965]: E0219 10:04:53.853025 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62dadf17-d312-4ce0-b6f5-9319261705e2" containerName="sg-core" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.853031 4965 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="62dadf17-d312-4ce0-b6f5-9319261705e2" containerName="sg-core" Feb 19 10:04:53 crc kubenswrapper[4965]: E0219 10:04:53.853046 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62dadf17-d312-4ce0-b6f5-9319261705e2" containerName="proxy-httpd" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.853051 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="62dadf17-d312-4ce0-b6f5-9319261705e2" containerName="proxy-httpd" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.856733 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="62dadf17-d312-4ce0-b6f5-9319261705e2" containerName="ceilometer-notification-agent" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.856765 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="62dadf17-d312-4ce0-b6f5-9319261705e2" containerName="proxy-httpd" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.856783 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="62dadf17-d312-4ce0-b6f5-9319261705e2" containerName="ceilometer-central-agent" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.856833 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="62dadf17-d312-4ce0-b6f5-9319261705e2" containerName="sg-core" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.859413 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.862176 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.862388 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.869607 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.984894 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") " pod="openstack/ceilometer-0" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.984949 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-config-data\") pod \"ceilometer-0\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") " pod="openstack/ceilometer-0" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.984975 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e68e49-9977-4222-bc61-95a5f5234d82-log-httpd\") pod \"ceilometer-0\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") " pod="openstack/ceilometer-0" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.985069 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") " 
pod="openstack/ceilometer-0" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.985214 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e68e49-9977-4222-bc61-95a5f5234d82-run-httpd\") pod \"ceilometer-0\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") " pod="openstack/ceilometer-0" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.985245 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-scripts\") pod \"ceilometer-0\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") " pod="openstack/ceilometer-0" Feb 19 10:04:53 crc kubenswrapper[4965]: I0219 10:04:53.985386 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m5dq\" (UniqueName: \"kubernetes.io/projected/88e68e49-9977-4222-bc61-95a5f5234d82-kube-api-access-5m5dq\") pod \"ceilometer-0\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") " pod="openstack/ceilometer-0" Feb 19 10:04:54 crc kubenswrapper[4965]: I0219 10:04:54.087068 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e68e49-9977-4222-bc61-95a5f5234d82-run-httpd\") pod \"ceilometer-0\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") " pod="openstack/ceilometer-0" Feb 19 10:04:54 crc kubenswrapper[4965]: I0219 10:04:54.087110 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-scripts\") pod \"ceilometer-0\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") " pod="openstack/ceilometer-0" Feb 19 10:04:54 crc kubenswrapper[4965]: I0219 10:04:54.087171 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5m5dq\" (UniqueName: \"kubernetes.io/projected/88e68e49-9977-4222-bc61-95a5f5234d82-kube-api-access-5m5dq\") pod \"ceilometer-0\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") " pod="openstack/ceilometer-0" Feb 19 10:04:54 crc kubenswrapper[4965]: I0219 10:04:54.087221 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") " pod="openstack/ceilometer-0" Feb 19 10:04:54 crc kubenswrapper[4965]: I0219 10:04:54.087242 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-config-data\") pod \"ceilometer-0\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") " pod="openstack/ceilometer-0" Feb 19 10:04:54 crc kubenswrapper[4965]: I0219 10:04:54.087257 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e68e49-9977-4222-bc61-95a5f5234d82-log-httpd\") pod \"ceilometer-0\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") " pod="openstack/ceilometer-0" Feb 19 10:04:54 crc kubenswrapper[4965]: I0219 10:04:54.087296 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") " pod="openstack/ceilometer-0" Feb 19 10:04:54 crc kubenswrapper[4965]: I0219 10:04:54.087579 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e68e49-9977-4222-bc61-95a5f5234d82-run-httpd\") pod \"ceilometer-0\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") " pod="openstack/ceilometer-0" Feb 19 10:04:54 
crc kubenswrapper[4965]: I0219 10:04:54.088119 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e68e49-9977-4222-bc61-95a5f5234d82-log-httpd\") pod \"ceilometer-0\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") " pod="openstack/ceilometer-0" Feb 19 10:04:54 crc kubenswrapper[4965]: I0219 10:04:54.091265 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") " pod="openstack/ceilometer-0" Feb 19 10:04:54 crc kubenswrapper[4965]: I0219 10:04:54.091603 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") " pod="openstack/ceilometer-0" Feb 19 10:04:54 crc kubenswrapper[4965]: I0219 10:04:54.099113 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-config-data\") pod \"ceilometer-0\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") " pod="openstack/ceilometer-0" Feb 19 10:04:54 crc kubenswrapper[4965]: I0219 10:04:54.099822 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-scripts\") pod \"ceilometer-0\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") " pod="openstack/ceilometer-0" Feb 19 10:04:54 crc kubenswrapper[4965]: I0219 10:04:54.103585 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:54 crc kubenswrapper[4965]: I0219 10:04:54.103632 4965 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:54 crc kubenswrapper[4965]: I0219 10:04:54.109724 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m5dq\" (UniqueName: \"kubernetes.io/projected/88e68e49-9977-4222-bc61-95a5f5234d82-kube-api-access-5m5dq\") pod \"ceilometer-0\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") " pod="openstack/ceilometer-0" Feb 19 10:04:54 crc kubenswrapper[4965]: I0219 10:04:54.141229 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:54 crc kubenswrapper[4965]: I0219 10:04:54.155871 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:54 crc kubenswrapper[4965]: I0219 10:04:54.191718 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:04:54 crc kubenswrapper[4965]: I0219 10:04:54.658778 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:04:54 crc kubenswrapper[4965]: I0219 10:04:54.787105 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88e68e49-9977-4222-bc61-95a5f5234d82","Type":"ContainerStarted","Data":"c4c837c920467e270c21bc2395a03c7a78da1f8ca00b1b343ca2411e96051784"} Feb 19 10:04:54 crc kubenswrapper[4965]: I0219 10:04:54.787786 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:54 crc kubenswrapper[4965]: I0219 10:04:54.787809 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:55 crc kubenswrapper[4965]: I0219 10:04:55.217851 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62dadf17-d312-4ce0-b6f5-9319261705e2" 
path="/var/lib/kubelet/pods/62dadf17-d312-4ce0-b6f5-9319261705e2/volumes" Feb 19 10:04:55 crc kubenswrapper[4965]: I0219 10:04:55.232815 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-pkhzc" Feb 19 10:04:55 crc kubenswrapper[4965]: I0219 10:04:55.321025 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e000de-4745-47c0-b6e6-8735c626518e-config-data\") pod \"d5e000de-4745-47c0-b6e6-8735c626518e\" (UID: \"d5e000de-4745-47c0-b6e6-8735c626518e\") " Feb 19 10:04:55 crc kubenswrapper[4965]: I0219 10:04:55.321301 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5e000de-4745-47c0-b6e6-8735c626518e-scripts\") pod \"d5e000de-4745-47c0-b6e6-8735c626518e\" (UID: \"d5e000de-4745-47c0-b6e6-8735c626518e\") " Feb 19 10:04:55 crc kubenswrapper[4965]: I0219 10:04:55.321390 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e000de-4745-47c0-b6e6-8735c626518e-combined-ca-bundle\") pod \"d5e000de-4745-47c0-b6e6-8735c626518e\" (UID: \"d5e000de-4745-47c0-b6e6-8735c626518e\") " Feb 19 10:04:55 crc kubenswrapper[4965]: I0219 10:04:55.321619 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jngbg\" (UniqueName: \"kubernetes.io/projected/d5e000de-4745-47c0-b6e6-8735c626518e-kube-api-access-jngbg\") pod \"d5e000de-4745-47c0-b6e6-8735c626518e\" (UID: \"d5e000de-4745-47c0-b6e6-8735c626518e\") " Feb 19 10:04:55 crc kubenswrapper[4965]: I0219 10:04:55.334766 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e000de-4745-47c0-b6e6-8735c626518e-scripts" (OuterVolumeSpecName: "scripts") pod "d5e000de-4745-47c0-b6e6-8735c626518e" (UID: 
"d5e000de-4745-47c0-b6e6-8735c626518e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:55 crc kubenswrapper[4965]: I0219 10:04:55.343997 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5e000de-4745-47c0-b6e6-8735c626518e-kube-api-access-jngbg" (OuterVolumeSpecName: "kube-api-access-jngbg") pod "d5e000de-4745-47c0-b6e6-8735c626518e" (UID: "d5e000de-4745-47c0-b6e6-8735c626518e"). InnerVolumeSpecName "kube-api-access-jngbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:04:55 crc kubenswrapper[4965]: I0219 10:04:55.366578 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e000de-4745-47c0-b6e6-8735c626518e-config-data" (OuterVolumeSpecName: "config-data") pod "d5e000de-4745-47c0-b6e6-8735c626518e" (UID: "d5e000de-4745-47c0-b6e6-8735c626518e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:55 crc kubenswrapper[4965]: I0219 10:04:55.398316 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e000de-4745-47c0-b6e6-8735c626518e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5e000de-4745-47c0-b6e6-8735c626518e" (UID: "d5e000de-4745-47c0-b6e6-8735c626518e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:04:55 crc kubenswrapper[4965]: I0219 10:04:55.425696 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e000de-4745-47c0-b6e6-8735c626518e-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:55 crc kubenswrapper[4965]: I0219 10:04:55.425734 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5e000de-4745-47c0-b6e6-8735c626518e-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:55 crc kubenswrapper[4965]: I0219 10:04:55.425743 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e000de-4745-47c0-b6e6-8735c626518e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:55 crc kubenswrapper[4965]: I0219 10:04:55.425754 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jngbg\" (UniqueName: \"kubernetes.io/projected/d5e000de-4745-47c0-b6e6-8735c626518e-kube-api-access-jngbg\") on node \"crc\" DevicePath \"\"" Feb 19 10:04:55 crc kubenswrapper[4965]: I0219 10:04:55.801588 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-pkhzc" event={"ID":"d5e000de-4745-47c0-b6e6-8735c626518e","Type":"ContainerDied","Data":"f250f6c6397aee1740ca2facf207e2acd5cd929f1225e8bc74d84d4e0649401c"} Feb 19 10:04:55 crc kubenswrapper[4965]: I0219 10:04:55.801889 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f250f6c6397aee1740ca2facf207e2acd5cd929f1225e8bc74d84d4e0649401c" Feb 19 10:04:55 crc kubenswrapper[4965]: I0219 10:04:55.801657 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-pkhzc" Feb 19 10:04:55 crc kubenswrapper[4965]: I0219 10:04:55.804091 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88e68e49-9977-4222-bc61-95a5f5234d82","Type":"ContainerStarted","Data":"5451188c3516db6f5cc1c463714dc7928296541dd0b4a5adb26ad6986c4fa5bf"} Feb 19 10:04:55 crc kubenswrapper[4965]: I0219 10:04:55.916519 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 10:04:55 crc kubenswrapper[4965]: I0219 10:04:55.916635 4965 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 10:04:55 crc kubenswrapper[4965]: I0219 10:04:55.918750 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 10:04:56 crc kubenswrapper[4965]: I0219 10:04:56.030654 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 10:04:56 crc kubenswrapper[4965]: E0219 10:04:56.031178 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5e000de-4745-47c0-b6e6-8735c626518e" containerName="nova-cell0-conductor-db-sync" Feb 19 10:04:56 crc kubenswrapper[4965]: I0219 10:04:56.031212 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5e000de-4745-47c0-b6e6-8735c626518e" containerName="nova-cell0-conductor-db-sync" Feb 19 10:04:56 crc kubenswrapper[4965]: I0219 10:04:56.031500 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5e000de-4745-47c0-b6e6-8735c626518e" containerName="nova-cell0-conductor-db-sync" Feb 19 10:04:56 crc kubenswrapper[4965]: I0219 10:04:56.032408 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:56 crc kubenswrapper[4965]: I0219 10:04:56.039619 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wxc9w" Feb 19 10:04:56 crc kubenswrapper[4965]: I0219 10:04:56.039648 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 10:04:56 crc kubenswrapper[4965]: I0219 10:04:56.069063 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 10:04:56 crc kubenswrapper[4965]: I0219 10:04:56.169360 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmhq5\" (UniqueName: \"kubernetes.io/projected/ac5be26a-d9dd-4131-b369-55b0a89377a5-kube-api-access-hmhq5\") pod \"nova-cell0-conductor-0\" (UID: \"ac5be26a-d9dd-4131-b369-55b0a89377a5\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:56 crc kubenswrapper[4965]: I0219 10:04:56.169597 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac5be26a-d9dd-4131-b369-55b0a89377a5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ac5be26a-d9dd-4131-b369-55b0a89377a5\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:56 crc kubenswrapper[4965]: I0219 10:04:56.169616 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac5be26a-d9dd-4131-b369-55b0a89377a5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ac5be26a-d9dd-4131-b369-55b0a89377a5\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:56 crc kubenswrapper[4965]: I0219 10:04:56.271931 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmhq5\" (UniqueName: 
\"kubernetes.io/projected/ac5be26a-d9dd-4131-b369-55b0a89377a5-kube-api-access-hmhq5\") pod \"nova-cell0-conductor-0\" (UID: \"ac5be26a-d9dd-4131-b369-55b0a89377a5\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:56 crc kubenswrapper[4965]: I0219 10:04:56.272027 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac5be26a-d9dd-4131-b369-55b0a89377a5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ac5be26a-d9dd-4131-b369-55b0a89377a5\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:56 crc kubenswrapper[4965]: I0219 10:04:56.272078 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac5be26a-d9dd-4131-b369-55b0a89377a5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ac5be26a-d9dd-4131-b369-55b0a89377a5\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:56 crc kubenswrapper[4965]: I0219 10:04:56.281816 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac5be26a-d9dd-4131-b369-55b0a89377a5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ac5be26a-d9dd-4131-b369-55b0a89377a5\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:56 crc kubenswrapper[4965]: I0219 10:04:56.292937 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac5be26a-d9dd-4131-b369-55b0a89377a5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ac5be26a-d9dd-4131-b369-55b0a89377a5\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:56 crc kubenswrapper[4965]: I0219 10:04:56.296721 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmhq5\" (UniqueName: \"kubernetes.io/projected/ac5be26a-d9dd-4131-b369-55b0a89377a5-kube-api-access-hmhq5\") pod \"nova-cell0-conductor-0\" (UID: 
\"ac5be26a-d9dd-4131-b369-55b0a89377a5\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:56 crc kubenswrapper[4965]: I0219 10:04:56.577989 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:56 crc kubenswrapper[4965]: I0219 10:04:56.829280 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88e68e49-9977-4222-bc61-95a5f5234d82","Type":"ContainerStarted","Data":"6aefae543a28b56087fcbe232e27c6e71e20964e701051d21a34f6193eb5bd60"} Feb 19 10:04:57 crc kubenswrapper[4965]: I0219 10:04:57.252536 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 10:04:57 crc kubenswrapper[4965]: I0219 10:04:57.488737 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:57 crc kubenswrapper[4965]: I0219 10:04:57.488872 4965 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 10:04:57 crc kubenswrapper[4965]: I0219 10:04:57.840072 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88e68e49-9977-4222-bc61-95a5f5234d82","Type":"ContainerStarted","Data":"dbff79fcd301d92b8b4712c1865b0b8af703a6a34fc47e443a192f4516f28f77"} Feb 19 10:04:57 crc kubenswrapper[4965]: I0219 10:04:57.841350 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ac5be26a-d9dd-4131-b369-55b0a89377a5","Type":"ContainerStarted","Data":"8692949f4f511afb96b48df4057b63bbd3f837003ad9a7b7d588b1cd2c313edc"} Feb 19 10:04:57 crc kubenswrapper[4965]: I0219 10:04:57.841390 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ac5be26a-d9dd-4131-b369-55b0a89377a5","Type":"ContainerStarted","Data":"ae7c1aaf2f2a50b29f01c3e9b404483ea24482558db294c1dfc43c5c5440fc6c"} Feb 19 10:04:57 crc 
kubenswrapper[4965]: I0219 10:04:57.841498 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 19 10:04:57 crc kubenswrapper[4965]: I0219 10:04:57.862999 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.862976164 podStartE2EDuration="2.862976164s" podCreationTimestamp="2026-02-19 10:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:04:57.856752293 +0000 UTC m=+1353.478073623" watchObservedRunningTime="2026-02-19 10:04:57.862976164 +0000 UTC m=+1353.484297464" Feb 19 10:04:58 crc kubenswrapper[4965]: I0219 10:04:58.019354 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 10:04:59 crc kubenswrapper[4965]: I0219 10:04:59.869148 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88e68e49-9977-4222-bc61-95a5f5234d82","Type":"ContainerStarted","Data":"fdb7c5f403c89321c60963fc6f4e717d05f14d805667944e624b94c44b07b5d2"} Feb 19 10:04:59 crc kubenswrapper[4965]: I0219 10:04:59.869757 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 10:04:59 crc kubenswrapper[4965]: I0219 10:04:59.901226 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.916364007 podStartE2EDuration="6.901187916s" podCreationTimestamp="2026-02-19 10:04:53 +0000 UTC" firstStartedPulling="2026-02-19 10:04:54.660004368 +0000 UTC m=+1350.281325678" lastFinishedPulling="2026-02-19 10:04:58.644828277 +0000 UTC m=+1354.266149587" observedRunningTime="2026-02-19 10:04:59.892448854 +0000 UTC m=+1355.513770174" watchObservedRunningTime="2026-02-19 10:04:59.901187916 +0000 UTC m=+1355.522509226" Feb 19 10:05:06 crc 
kubenswrapper[4965]: I0219 10:05:06.686843 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.243218 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-hlxzf"] Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.244954 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hlxzf"] Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.249408 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hlxzf" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.258943 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.262767 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.327428 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34a7f92d-3391-4c12-8d6b-14b531d39757-scripts\") pod \"nova-cell0-cell-mapping-hlxzf\" (UID: \"34a7f92d-3391-4c12-8d6b-14b531d39757\") " pod="openstack/nova-cell0-cell-mapping-hlxzf" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.327622 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsrxk\" (UniqueName: \"kubernetes.io/projected/34a7f92d-3391-4c12-8d6b-14b531d39757-kube-api-access-tsrxk\") pod \"nova-cell0-cell-mapping-hlxzf\" (UID: \"34a7f92d-3391-4c12-8d6b-14b531d39757\") " pod="openstack/nova-cell0-cell-mapping-hlxzf" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.327682 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/34a7f92d-3391-4c12-8d6b-14b531d39757-config-data\") pod \"nova-cell0-cell-mapping-hlxzf\" (UID: \"34a7f92d-3391-4c12-8d6b-14b531d39757\") " pod="openstack/nova-cell0-cell-mapping-hlxzf" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.327717 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a7f92d-3391-4c12-8d6b-14b531d39757-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hlxzf\" (UID: \"34a7f92d-3391-4c12-8d6b-14b531d39757\") " pod="openstack/nova-cell0-cell-mapping-hlxzf" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.402338 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.403749 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.408919 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.411404 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.431223 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsrxk\" (UniqueName: \"kubernetes.io/projected/34a7f92d-3391-4c12-8d6b-14b531d39757-kube-api-access-tsrxk\") pod \"nova-cell0-cell-mapping-hlxzf\" (UID: \"34a7f92d-3391-4c12-8d6b-14b531d39757\") " pod="openstack/nova-cell0-cell-mapping-hlxzf" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.431546 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a7f92d-3391-4c12-8d6b-14b531d39757-config-data\") pod \"nova-cell0-cell-mapping-hlxzf\" (UID: 
\"34a7f92d-3391-4c12-8d6b-14b531d39757\") " pod="openstack/nova-cell0-cell-mapping-hlxzf" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.431647 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a7f92d-3391-4c12-8d6b-14b531d39757-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hlxzf\" (UID: \"34a7f92d-3391-4c12-8d6b-14b531d39757\") " pod="openstack/nova-cell0-cell-mapping-hlxzf" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.431762 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34a7f92d-3391-4c12-8d6b-14b531d39757-scripts\") pod \"nova-cell0-cell-mapping-hlxzf\" (UID: \"34a7f92d-3391-4c12-8d6b-14b531d39757\") " pod="openstack/nova-cell0-cell-mapping-hlxzf" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.443038 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34a7f92d-3391-4c12-8d6b-14b531d39757-scripts\") pod \"nova-cell0-cell-mapping-hlxzf\" (UID: \"34a7f92d-3391-4c12-8d6b-14b531d39757\") " pod="openstack/nova-cell0-cell-mapping-hlxzf" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.453055 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a7f92d-3391-4c12-8d6b-14b531d39757-config-data\") pod \"nova-cell0-cell-mapping-hlxzf\" (UID: \"34a7f92d-3391-4c12-8d6b-14b531d39757\") " pod="openstack/nova-cell0-cell-mapping-hlxzf" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.457372 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a7f92d-3391-4c12-8d6b-14b531d39757-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hlxzf\" (UID: \"34a7f92d-3391-4c12-8d6b-14b531d39757\") " pod="openstack/nova-cell0-cell-mapping-hlxzf" Feb 19 
10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.468716 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsrxk\" (UniqueName: \"kubernetes.io/projected/34a7f92d-3391-4c12-8d6b-14b531d39757-kube-api-access-tsrxk\") pod \"nova-cell0-cell-mapping-hlxzf\" (UID: \"34a7f92d-3391-4c12-8d6b-14b531d39757\") " pod="openstack/nova-cell0-cell-mapping-hlxzf" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.526974 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.529036 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.534175 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db417f58-59be-4949-a86c-60ca4439ec09-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"db417f58-59be-4949-a86c-60ca4439ec09\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.534246 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx4qg\" (UniqueName: \"kubernetes.io/projected/db417f58-59be-4949-a86c-60ca4439ec09-kube-api-access-tx4qg\") pod \"nova-scheduler-0\" (UID: \"db417f58-59be-4949-a86c-60ca4439ec09\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.534306 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db417f58-59be-4949-a86c-60ca4439ec09-config-data\") pod \"nova-scheduler-0\" (UID: \"db417f58-59be-4949-a86c-60ca4439ec09\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.542072 4965 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"nova-api-config-data" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.558123 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.583832 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hlxzf" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.628337 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.629764 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.635025 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.636845 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123db546-f337-4b0c-828e-6b677b1fb954-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"123db546-f337-4b0c-828e-6b677b1fb954\") " pod="openstack/nova-api-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.637006 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db417f58-59be-4949-a86c-60ca4439ec09-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"db417f58-59be-4949-a86c-60ca4439ec09\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.637107 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx4qg\" (UniqueName: \"kubernetes.io/projected/db417f58-59be-4949-a86c-60ca4439ec09-kube-api-access-tx4qg\") pod \"nova-scheduler-0\" (UID: \"db417f58-59be-4949-a86c-60ca4439ec09\") " 
pod="openstack/nova-scheduler-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.637179 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/123db546-f337-4b0c-828e-6b677b1fb954-logs\") pod \"nova-api-0\" (UID: \"123db546-f337-4b0c-828e-6b677b1fb954\") " pod="openstack/nova-api-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.637334 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db417f58-59be-4949-a86c-60ca4439ec09-config-data\") pod \"nova-scheduler-0\" (UID: \"db417f58-59be-4949-a86c-60ca4439ec09\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.637415 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czl7c\" (UniqueName: \"kubernetes.io/projected/123db546-f337-4b0c-828e-6b677b1fb954-kube-api-access-czl7c\") pod \"nova-api-0\" (UID: \"123db546-f337-4b0c-828e-6b677b1fb954\") " pod="openstack/nova-api-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.637543 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123db546-f337-4b0c-828e-6b677b1fb954-config-data\") pod \"nova-api-0\" (UID: \"123db546-f337-4b0c-828e-6b677b1fb954\") " pod="openstack/nova-api-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.655033 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db417f58-59be-4949-a86c-60ca4439ec09-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"db417f58-59be-4949-a86c-60ca4439ec09\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.655753 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/db417f58-59be-4949-a86c-60ca4439ec09-config-data\") pod \"nova-scheduler-0\" (UID: \"db417f58-59be-4949-a86c-60ca4439ec09\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.663618 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.689596 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx4qg\" (UniqueName: \"kubernetes.io/projected/db417f58-59be-4949-a86c-60ca4439ec09-kube-api-access-tx4qg\") pod \"nova-scheduler-0\" (UID: \"db417f58-59be-4949-a86c-60ca4439ec09\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.729839 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.739272 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qdxt\" (UniqueName: \"kubernetes.io/projected/f4ee246c-cea9-4377-9f96-54387ac61022-kube-api-access-8qdxt\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4ee246c-cea9-4377-9f96-54387ac61022\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.739336 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ee246c-cea9-4377-9f96-54387ac61022-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4ee246c-cea9-4377-9f96-54387ac61022\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.739374 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czl7c\" (UniqueName: \"kubernetes.io/projected/123db546-f337-4b0c-828e-6b677b1fb954-kube-api-access-czl7c\") 
pod \"nova-api-0\" (UID: \"123db546-f337-4b0c-828e-6b677b1fb954\") " pod="openstack/nova-api-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.739427 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123db546-f337-4b0c-828e-6b677b1fb954-config-data\") pod \"nova-api-0\" (UID: \"123db546-f337-4b0c-828e-6b677b1fb954\") " pod="openstack/nova-api-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.739451 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4ee246c-cea9-4377-9f96-54387ac61022-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4ee246c-cea9-4377-9f96-54387ac61022\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.739488 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123db546-f337-4b0c-828e-6b677b1fb954-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"123db546-f337-4b0c-828e-6b677b1fb954\") " pod="openstack/nova-api-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.739556 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/123db546-f337-4b0c-828e-6b677b1fb954-logs\") pod \"nova-api-0\" (UID: \"123db546-f337-4b0c-828e-6b677b1fb954\") " pod="openstack/nova-api-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.739920 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/123db546-f337-4b0c-828e-6b677b1fb954-logs\") pod \"nova-api-0\" (UID: \"123db546-f337-4b0c-828e-6b677b1fb954\") " pod="openstack/nova-api-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.753777 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123db546-f337-4b0c-828e-6b677b1fb954-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"123db546-f337-4b0c-828e-6b677b1fb954\") " pod="openstack/nova-api-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.773173 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123db546-f337-4b0c-828e-6b677b1fb954-config-data\") pod \"nova-api-0\" (UID: \"123db546-f337-4b0c-828e-6b677b1fb954\") " pod="openstack/nova-api-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.774594 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czl7c\" (UniqueName: \"kubernetes.io/projected/123db546-f337-4b0c-828e-6b677b1fb954-kube-api-access-czl7c\") pod \"nova-api-0\" (UID: \"123db546-f337-4b0c-828e-6b677b1fb954\") " pod="openstack/nova-api-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.788043 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.790287 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.812783 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.818560 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.846810 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4ee246c-cea9-4377-9f96-54387ac61022-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4ee246c-cea9-4377-9f96-54387ac61022\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.846968 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lrns\" (UniqueName: \"kubernetes.io/projected/dbfed410-b5a6-4f7e-a33b-0bebd55379de-kube-api-access-9lrns\") pod \"nova-metadata-0\" (UID: \"dbfed410-b5a6-4f7e-a33b-0bebd55379de\") " pod="openstack/nova-metadata-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.846995 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbfed410-b5a6-4f7e-a33b-0bebd55379de-logs\") pod \"nova-metadata-0\" (UID: \"dbfed410-b5a6-4f7e-a33b-0bebd55379de\") " pod="openstack/nova-metadata-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.847030 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qdxt\" (UniqueName: \"kubernetes.io/projected/f4ee246c-cea9-4377-9f96-54387ac61022-kube-api-access-8qdxt\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4ee246c-cea9-4377-9f96-54387ac61022\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.847048 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbfed410-b5a6-4f7e-a33b-0bebd55379de-config-data\") pod \"nova-metadata-0\" (UID: \"dbfed410-b5a6-4f7e-a33b-0bebd55379de\") " pod="openstack/nova-metadata-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.847083 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ee246c-cea9-4377-9f96-54387ac61022-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4ee246c-cea9-4377-9f96-54387ac61022\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.847119 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbfed410-b5a6-4f7e-a33b-0bebd55379de-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dbfed410-b5a6-4f7e-a33b-0bebd55379de\") " pod="openstack/nova-metadata-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.853486 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4ee246c-cea9-4377-9f96-54387ac61022-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4ee246c-cea9-4377-9f96-54387ac61022\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.872841 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ee246c-cea9-4377-9f96-54387ac61022-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4ee246c-cea9-4377-9f96-54387ac61022\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.899801 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qdxt\" (UniqueName: 
\"kubernetes.io/projected/f4ee246c-cea9-4377-9f96-54387ac61022-kube-api-access-8qdxt\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4ee246c-cea9-4377-9f96-54387ac61022\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.905798 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.949093 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-s9fh4"] Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.950803 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.954909 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lrns\" (UniqueName: \"kubernetes.io/projected/dbfed410-b5a6-4f7e-a33b-0bebd55379de-kube-api-access-9lrns\") pod \"nova-metadata-0\" (UID: \"dbfed410-b5a6-4f7e-a33b-0bebd55379de\") " pod="openstack/nova-metadata-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.954952 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbfed410-b5a6-4f7e-a33b-0bebd55379de-logs\") pod \"nova-metadata-0\" (UID: \"dbfed410-b5a6-4f7e-a33b-0bebd55379de\") " pod="openstack/nova-metadata-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.955006 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbfed410-b5a6-4f7e-a33b-0bebd55379de-config-data\") pod \"nova-metadata-0\" (UID: \"dbfed410-b5a6-4f7e-a33b-0bebd55379de\") " pod="openstack/nova-metadata-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.955069 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dbfed410-b5a6-4f7e-a33b-0bebd55379de-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dbfed410-b5a6-4f7e-a33b-0bebd55379de\") " pod="openstack/nova-metadata-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.955953 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbfed410-b5a6-4f7e-a33b-0bebd55379de-logs\") pod \"nova-metadata-0\" (UID: \"dbfed410-b5a6-4f7e-a33b-0bebd55379de\") " pod="openstack/nova-metadata-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.959695 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbfed410-b5a6-4f7e-a33b-0bebd55379de-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dbfed410-b5a6-4f7e-a33b-0bebd55379de\") " pod="openstack/nova-metadata-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.970173 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lrns\" (UniqueName: \"kubernetes.io/projected/dbfed410-b5a6-4f7e-a33b-0bebd55379de-kube-api-access-9lrns\") pod \"nova-metadata-0\" (UID: \"dbfed410-b5a6-4f7e-a33b-0bebd55379de\") " pod="openstack/nova-metadata-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.977497 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbfed410-b5a6-4f7e-a33b-0bebd55379de-config-data\") pod \"nova-metadata-0\" (UID: \"dbfed410-b5a6-4f7e-a33b-0bebd55379de\") " pod="openstack/nova-metadata-0" Feb 19 10:05:07 crc kubenswrapper[4965]: I0219 10:05:07.998259 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-s9fh4"] Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.067053 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-config\") pod \"dnsmasq-dns-884c8b8f5-s9fh4\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.067122 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-dns-svc\") pod \"dnsmasq-dns-884c8b8f5-s9fh4\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.067160 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgwf8\" (UniqueName: \"kubernetes.io/projected/03baa534-d46c-4cb3-93ce-d124f65241ed-kube-api-access-tgwf8\") pod \"dnsmasq-dns-884c8b8f5-s9fh4\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.067201 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-ovsdbserver-sb\") pod \"dnsmasq-dns-884c8b8f5-s9fh4\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.067267 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-dns-swift-storage-0\") pod \"dnsmasq-dns-884c8b8f5-s9fh4\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.067317 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-ovsdbserver-nb\") pod \"dnsmasq-dns-884c8b8f5-s9fh4\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.169906 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-dns-swift-storage-0\") pod \"dnsmasq-dns-884c8b8f5-s9fh4\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.170003 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-ovsdbserver-nb\") pod \"dnsmasq-dns-884c8b8f5-s9fh4\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.170262 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-config\") pod \"dnsmasq-dns-884c8b8f5-s9fh4\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.170331 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-dns-svc\") pod \"dnsmasq-dns-884c8b8f5-s9fh4\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.170360 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgwf8\" (UniqueName: \"kubernetes.io/projected/03baa534-d46c-4cb3-93ce-d124f65241ed-kube-api-access-tgwf8\") pod \"dnsmasq-dns-884c8b8f5-s9fh4\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.170412 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-ovsdbserver-sb\") pod \"dnsmasq-dns-884c8b8f5-s9fh4\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.171659 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-ovsdbserver-sb\") pod \"dnsmasq-dns-884c8b8f5-s9fh4\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.172235 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-dns-swift-storage-0\") pod \"dnsmasq-dns-884c8b8f5-s9fh4\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.172738 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-ovsdbserver-nb\") pod \"dnsmasq-dns-884c8b8f5-s9fh4\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.173259 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-config\") pod \"dnsmasq-dns-884c8b8f5-s9fh4\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.174252 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-dns-svc\") pod \"dnsmasq-dns-884c8b8f5-s9fh4\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.180539 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.190357 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgwf8\" (UniqueName: \"kubernetes.io/projected/03baa534-d46c-4cb3-93ce-d124f65241ed-kube-api-access-tgwf8\") pod \"dnsmasq-dns-884c8b8f5-s9fh4\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.207752 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.311051 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.385921 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hlxzf"]
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.579776 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.830860 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k2dlm"]
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.832448 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k2dlm"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.837504 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.839417 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.862271 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.920889 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95695f23-a4c9-4165-9dd6-d897ada26e93-config-data\") pod \"nova-cell1-conductor-db-sync-k2dlm\" (UID: \"95695f23-a4c9-4165-9dd6-d897ada26e93\") " pod="openstack/nova-cell1-conductor-db-sync-k2dlm"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.921133 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxj7x\" (UniqueName: \"kubernetes.io/projected/95695f23-a4c9-4165-9dd6-d897ada26e93-kube-api-access-dxj7x\") pod \"nova-cell1-conductor-db-sync-k2dlm\" (UID: \"95695f23-a4c9-4165-9dd6-d897ada26e93\") " pod="openstack/nova-cell1-conductor-db-sync-k2dlm"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.921243 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95695f23-a4c9-4165-9dd6-d897ada26e93-scripts\") pod \"nova-cell1-conductor-db-sync-k2dlm\" (UID: \"95695f23-a4c9-4165-9dd6-d897ada26e93\") " pod="openstack/nova-cell1-conductor-db-sync-k2dlm"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.921398 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95695f23-a4c9-4165-9dd6-d897ada26e93-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-k2dlm\" (UID: \"95695f23-a4c9-4165-9dd6-d897ada26e93\") " pod="openstack/nova-cell1-conductor-db-sync-k2dlm"
Feb 19 10:05:08 crc kubenswrapper[4965]: I0219 10:05:08.965152 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k2dlm"]
Feb 19 10:05:09 crc kubenswrapper[4965]: I0219 10:05:09.030288 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95695f23-a4c9-4165-9dd6-d897ada26e93-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-k2dlm\" (UID: \"95695f23-a4c9-4165-9dd6-d897ada26e93\") " pod="openstack/nova-cell1-conductor-db-sync-k2dlm"
Feb 19 10:05:09 crc kubenswrapper[4965]: I0219 10:05:09.030369 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95695f23-a4c9-4165-9dd6-d897ada26e93-config-data\") pod \"nova-cell1-conductor-db-sync-k2dlm\" (UID: \"95695f23-a4c9-4165-9dd6-d897ada26e93\") " pod="openstack/nova-cell1-conductor-db-sync-k2dlm"
Feb 19 10:05:09 crc kubenswrapper[4965]: I0219 10:05:09.030461 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxj7x\" (UniqueName: \"kubernetes.io/projected/95695f23-a4c9-4165-9dd6-d897ada26e93-kube-api-access-dxj7x\") pod \"nova-cell1-conductor-db-sync-k2dlm\" (UID: \"95695f23-a4c9-4165-9dd6-d897ada26e93\") " pod="openstack/nova-cell1-conductor-db-sync-k2dlm"
Feb 19 10:05:09 crc kubenswrapper[4965]: I0219 10:05:09.030511 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95695f23-a4c9-4165-9dd6-d897ada26e93-scripts\") pod \"nova-cell1-conductor-db-sync-k2dlm\" (UID: \"95695f23-a4c9-4165-9dd6-d897ada26e93\") " pod="openstack/nova-cell1-conductor-db-sync-k2dlm"
Feb 19 10:05:09 crc kubenswrapper[4965]: I0219 10:05:09.041849 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95695f23-a4c9-4165-9dd6-d897ada26e93-config-data\") pod \"nova-cell1-conductor-db-sync-k2dlm\" (UID: \"95695f23-a4c9-4165-9dd6-d897ada26e93\") " pod="openstack/nova-cell1-conductor-db-sync-k2dlm"
Feb 19 10:05:09 crc kubenswrapper[4965]: I0219 10:05:09.048642 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95695f23-a4c9-4165-9dd6-d897ada26e93-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-k2dlm\" (UID: \"95695f23-a4c9-4165-9dd6-d897ada26e93\") " pod="openstack/nova-cell1-conductor-db-sync-k2dlm"
Feb 19 10:05:09 crc kubenswrapper[4965]: I0219 10:05:09.051996 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95695f23-a4c9-4165-9dd6-d897ada26e93-scripts\") pod \"nova-cell1-conductor-db-sync-k2dlm\" (UID: \"95695f23-a4c9-4165-9dd6-d897ada26e93\") " pod="openstack/nova-cell1-conductor-db-sync-k2dlm"
Feb 19 10:05:09 crc kubenswrapper[4965]: I0219 10:05:09.057672 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"123db546-f337-4b0c-828e-6b677b1fb954","Type":"ContainerStarted","Data":"873a2d6b95840a03ef505d64eb76c20caff547d87461ea36bf50c1aaa47405b2"}
Feb 19 10:05:09 crc kubenswrapper[4965]: I0219 10:05:09.059108 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxj7x\" (UniqueName: \"kubernetes.io/projected/95695f23-a4c9-4165-9dd6-d897ada26e93-kube-api-access-dxj7x\") pod \"nova-cell1-conductor-db-sync-k2dlm\" (UID: \"95695f23-a4c9-4165-9dd6-d897ada26e93\") " pod="openstack/nova-cell1-conductor-db-sync-k2dlm"
Feb 19 10:05:09 crc kubenswrapper[4965]: I0219 10:05:09.108117 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hlxzf" event={"ID":"34a7f92d-3391-4c12-8d6b-14b531d39757","Type":"ContainerStarted","Data":"17e34b954630a15dc81e81259a8da845f60d1930dfd17fdc208b22d4eea61e55"}
Feb 19 10:05:09 crc kubenswrapper[4965]: I0219 10:05:09.108158 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hlxzf" event={"ID":"34a7f92d-3391-4c12-8d6b-14b531d39757","Type":"ContainerStarted","Data":"98f6ce1c3f4ded2169357ba14e0106b52d8bb44f2297074d6f61be52085281c7"}
Feb 19 10:05:09 crc kubenswrapper[4965]: I0219 10:05:09.114402 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 10:05:09 crc kubenswrapper[4965]: I0219 10:05:09.125531 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"db417f58-59be-4949-a86c-60ca4439ec09","Type":"ContainerStarted","Data":"1f3f761240a8147451b8c3a853cbf11c25ecccf012f617661287386e34126af3"}
Feb 19 10:05:09 crc kubenswrapper[4965]: I0219 10:05:09.136284 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 19 10:05:09 crc kubenswrapper[4965]: I0219 10:05:09.246588 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k2dlm"
Feb 19 10:05:09 crc kubenswrapper[4965]: I0219 10:05:09.293036 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-hlxzf" podStartSLOduration=2.293016331 podStartE2EDuration="2.293016331s" podCreationTimestamp="2026-02-19 10:05:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:05:09.14968845 +0000 UTC m=+1364.771009760" watchObservedRunningTime="2026-02-19 10:05:09.293016331 +0000 UTC m=+1364.914337641"
Feb 19 10:05:09 crc kubenswrapper[4965]: I0219 10:05:09.293788 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-s9fh4"]
Feb 19 10:05:09 crc kubenswrapper[4965]: I0219 10:05:09.907754 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k2dlm"]
Feb 19 10:05:10 crc kubenswrapper[4965]: I0219 10:05:10.062368 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 19 10:05:10 crc kubenswrapper[4965]: I0219 10:05:10.062612 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="ac5be26a-d9dd-4131-b369-55b0a89377a5" containerName="nova-cell0-conductor-conductor" containerID="cri-o://8692949f4f511afb96b48df4057b63bbd3f837003ad9a7b7d588b1cd2c313edc" gracePeriod=30
Feb 19 10:05:10 crc kubenswrapper[4965]: I0219 10:05:10.091184 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 19 10:05:10 crc kubenswrapper[4965]: I0219 10:05:10.119632 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 19 10:05:10 crc kubenswrapper[4965]: I0219 10:05:10.150612 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f4ee246c-cea9-4377-9f96-54387ac61022","Type":"ContainerStarted","Data":"6724d1622fe064ffd34d221b4ffa2500297856af043a23e8820038be3e098aa9"}
Feb 19 10:05:10 crc kubenswrapper[4965]: I0219 10:05:10.153178 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 19 10:05:10 crc kubenswrapper[4965]: I0219 10:05:10.161119 4965 generic.go:334] "Generic (PLEG): container finished" podID="03baa534-d46c-4cb3-93ce-d124f65241ed" containerID="c765b087378c663902e648dc871993e786501e97cf054ad30939e2d5a82e3fb5" exitCode=0
Feb 19 10:05:10 crc kubenswrapper[4965]: I0219 10:05:10.161220 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4" event={"ID":"03baa534-d46c-4cb3-93ce-d124f65241ed","Type":"ContainerDied","Data":"c765b087378c663902e648dc871993e786501e97cf054ad30939e2d5a82e3fb5"}
Feb 19 10:05:10 crc kubenswrapper[4965]: I0219 10:05:10.161245 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4" event={"ID":"03baa534-d46c-4cb3-93ce-d124f65241ed","Type":"ContainerStarted","Data":"369fa776a874e268ee544a320b9003afa4f74f3c569eb275e5adc7d34a883b65"}
Feb 19 10:05:10 crc kubenswrapper[4965]: I0219 10:05:10.167539 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dbfed410-b5a6-4f7e-a33b-0bebd55379de","Type":"ContainerStarted","Data":"e69be9c22fa256646002d1ee6bf6b2121b98f5049e49ab65d21b31aaf3be5d28"}
Feb 19 10:05:10 crc kubenswrapper[4965]: I0219 10:05:10.178725 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 10:05:10 crc kubenswrapper[4965]: I0219 10:05:10.190982 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k2dlm" event={"ID":"95695f23-a4c9-4165-9dd6-d897ada26e93","Type":"ContainerStarted","Data":"b0a95b5586e3a2df05ddb3fb61b7703fc28c02de4c14db4033e0c9fad71c52f4"}
Feb 19 10:05:11 crc kubenswrapper[4965]: I0219 10:05:11.244435 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4"
Feb 19 10:05:11 crc kubenswrapper[4965]: I0219 10:05:11.244729 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4" event={"ID":"03baa534-d46c-4cb3-93ce-d124f65241ed","Type":"ContainerStarted","Data":"3c8017670763c823c5cc67a9fc487d6567a64c60883754f318545572af0596ff"}
Feb 19 10:05:11 crc kubenswrapper[4965]: I0219 10:05:11.255973 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k2dlm" event={"ID":"95695f23-a4c9-4165-9dd6-d897ada26e93","Type":"ContainerStarted","Data":"e6e39480141c2b298b0714ff7b8a5bf42287acb818abf00252f976c3aba872c4"}
Feb 19 10:05:11 crc kubenswrapper[4965]: I0219 10:05:11.271337 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4" podStartSLOduration=4.271315238 podStartE2EDuration="4.271315238s" podCreationTimestamp="2026-02-19 10:05:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:05:11.269402182 +0000 UTC m=+1366.890723492" watchObservedRunningTime="2026-02-19 10:05:11.271315238 +0000 UTC m=+1366.892636558"
Feb 19 10:05:11 crc kubenswrapper[4965]: I0219 10:05:11.295400 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-k2dlm" podStartSLOduration=3.295380362 podStartE2EDuration="3.295380362s" podCreationTimestamp="2026-02-19 10:05:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:05:11.288509685 +0000 UTC m=+1366.909830995" watchObservedRunningTime="2026-02-19 10:05:11.295380362 +0000 UTC m=+1366.916701672"
Feb 19 10:05:11 crc kubenswrapper[4965]: E0219 10:05:11.591629 4965 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8692949f4f511afb96b48df4057b63bbd3f837003ad9a7b7d588b1cd2c313edc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 19 10:05:11 crc kubenswrapper[4965]: E0219 10:05:11.593447 4965 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8692949f4f511afb96b48df4057b63bbd3f837003ad9a7b7d588b1cd2c313edc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 19 10:05:11 crc kubenswrapper[4965]: E0219 10:05:11.610911 4965 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8692949f4f511afb96b48df4057b63bbd3f837003ad9a7b7d588b1cd2c313edc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 19 10:05:11 crc kubenswrapper[4965]: E0219 10:05:11.610994 4965 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="ac5be26a-d9dd-4131-b369-55b0a89377a5" containerName="nova-cell0-conductor-conductor"
Feb 19 10:05:12 crc kubenswrapper[4965]: I0219 10:05:12.814105 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:05:12 crc kubenswrapper[4965]: I0219 10:05:12.814766 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88e68e49-9977-4222-bc61-95a5f5234d82" containerName="ceilometer-central-agent" containerID="cri-o://5451188c3516db6f5cc1c463714dc7928296541dd0b4a5adb26ad6986c4fa5bf" gracePeriod=30
Feb 19 10:05:12 crc kubenswrapper[4965]: I0219 10:05:12.814912 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88e68e49-9977-4222-bc61-95a5f5234d82" containerName="proxy-httpd" containerID="cri-o://fdb7c5f403c89321c60963fc6f4e717d05f14d805667944e624b94c44b07b5d2" gracePeriod=30
Feb 19 10:05:12 crc kubenswrapper[4965]: I0219 10:05:12.814960 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88e68e49-9977-4222-bc61-95a5f5234d82" containerName="sg-core" containerID="cri-o://dbff79fcd301d92b8b4712c1865b0b8af703a6a34fc47e443a192f4516f28f77" gracePeriod=30
Feb 19 10:05:12 crc kubenswrapper[4965]: I0219 10:05:12.815004 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88e68e49-9977-4222-bc61-95a5f5234d82" containerName="ceilometer-notification-agent" containerID="cri-o://6aefae543a28b56087fcbe232e27c6e71e20964e701051d21a34f6193eb5bd60" gracePeriod=30
Feb 19 10:05:12 crc kubenswrapper[4965]: I0219 10:05:12.844180 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="88e68e49-9977-4222-bc61-95a5f5234d82" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.208:3000/\": EOF"
Feb 19 10:05:13 crc kubenswrapper[4965]: I0219 10:05:13.291847 4965 generic.go:334] "Generic (PLEG): container finished" podID="88e68e49-9977-4222-bc61-95a5f5234d82" containerID="fdb7c5f403c89321c60963fc6f4e717d05f14d805667944e624b94c44b07b5d2" exitCode=0
Feb 19 10:05:13 crc kubenswrapper[4965]: I0219 10:05:13.292392 4965 generic.go:334] "Generic (PLEG): container finished" podID="88e68e49-9977-4222-bc61-95a5f5234d82" containerID="dbff79fcd301d92b8b4712c1865b0b8af703a6a34fc47e443a192f4516f28f77" exitCode=2
Feb 19 10:05:13 crc kubenswrapper[4965]: I0219 10:05:13.292491 4965 generic.go:334] "Generic (PLEG): container finished" podID="88e68e49-9977-4222-bc61-95a5f5234d82" containerID="5451188c3516db6f5cc1c463714dc7928296541dd0b4a5adb26ad6986c4fa5bf" exitCode=0
Feb 19 10:05:13 crc kubenswrapper[4965]: I0219 10:05:13.291916 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88e68e49-9977-4222-bc61-95a5f5234d82","Type":"ContainerDied","Data":"fdb7c5f403c89321c60963fc6f4e717d05f14d805667944e624b94c44b07b5d2"}
Feb 19 10:05:13 crc kubenswrapper[4965]: I0219 10:05:13.292674 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88e68e49-9977-4222-bc61-95a5f5234d82","Type":"ContainerDied","Data":"dbff79fcd301d92b8b4712c1865b0b8af703a6a34fc47e443a192f4516f28f77"}
Feb 19 10:05:13 crc kubenswrapper[4965]: I0219 10:05:13.292772 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88e68e49-9977-4222-bc61-95a5f5234d82","Type":"ContainerDied","Data":"5451188c3516db6f5cc1c463714dc7928296541dd0b4a5adb26ad6986c4fa5bf"}
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.307824 4965 generic.go:334] "Generic (PLEG): container finished" podID="88e68e49-9977-4222-bc61-95a5f5234d82" containerID="6aefae543a28b56087fcbe232e27c6e71e20964e701051d21a34f6193eb5bd60" exitCode=0
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.307911 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88e68e49-9977-4222-bc61-95a5f5234d82","Type":"ContainerDied","Data":"6aefae543a28b56087fcbe232e27c6e71e20964e701051d21a34f6193eb5bd60"}
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.311313 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f4ee246c-cea9-4377-9f96-54387ac61022","Type":"ContainerStarted","Data":"e55738e982b49f6e967c17ef7f27af1b07c46293e5a63d43b461413a45c29242"}
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.311480 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f4ee246c-cea9-4377-9f96-54387ac61022" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://e55738e982b49f6e967c17ef7f27af1b07c46293e5a63d43b461413a45c29242" gracePeriod=30
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.314868 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dbfed410-b5a6-4f7e-a33b-0bebd55379de","Type":"ContainerStarted","Data":"2a7b486c7dcc9c70b13edbd41b42c324d82970359314055f542ed780688f5ccb"}
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.314916 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dbfed410-b5a6-4f7e-a33b-0bebd55379de","Type":"ContainerStarted","Data":"7870d71421292b73e018889f6d727bd9369d55541b9758591742518df9506e47"}
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.315053 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dbfed410-b5a6-4f7e-a33b-0bebd55379de" containerName="nova-metadata-log" containerID="cri-o://7870d71421292b73e018889f6d727bd9369d55541b9758591742518df9506e47" gracePeriod=30
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.315160 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dbfed410-b5a6-4f7e-a33b-0bebd55379de" containerName="nova-metadata-metadata" containerID="cri-o://2a7b486c7dcc9c70b13edbd41b42c324d82970359314055f542ed780688f5ccb" gracePeriod=30
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.325471 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"db417f58-59be-4949-a86c-60ca4439ec09","Type":"ContainerStarted","Data":"6f65abfd992faa041dc14920565b145c1bc5db823aa149fcb04f697a71a60ac0"}
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.325613 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="db417f58-59be-4949-a86c-60ca4439ec09" containerName="nova-scheduler-scheduler" containerID="cri-o://6f65abfd992faa041dc14920565b145c1bc5db823aa149fcb04f697a71a60ac0" gracePeriod=30
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.333425 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.264269204 podStartE2EDuration="7.333402531s" podCreationTimestamp="2026-02-19 10:05:07 +0000 UTC" firstStartedPulling="2026-02-19 10:05:09.048251345 +0000 UTC m=+1364.669572645" lastFinishedPulling="2026-02-19 10:05:13.117384662 +0000 UTC m=+1368.738705972" observedRunningTime="2026-02-19 10:05:14.331464195 +0000 UTC m=+1369.952785515" watchObservedRunningTime="2026-02-19 10:05:14.333402531 +0000 UTC m=+1369.954723841"
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.338371 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"123db546-f337-4b0c-828e-6b677b1fb954","Type":"ContainerStarted","Data":"1f579eaee71c7a9dc8a5379f1e5f83bc285c6d4eaf2acc3566b5f381ea6667c4"}
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.338416 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"123db546-f337-4b0c-828e-6b677b1fb954","Type":"ContainerStarted","Data":"35d9cb5fff67f62d381376ed728ea50496920f694b5ea9d371ddbdc0da48546d"}
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.338535 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="123db546-f337-4b0c-828e-6b677b1fb954" containerName="nova-api-log" containerID="cri-o://35d9cb5fff67f62d381376ed728ea50496920f694b5ea9d371ddbdc0da48546d" gracePeriod=30
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.338711 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="123db546-f337-4b0c-828e-6b677b1fb954" containerName="nova-api-api" containerID="cri-o://1f579eaee71c7a9dc8a5379f1e5f83bc285c6d4eaf2acc3566b5f381ea6667c4" gracePeriod=30
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.373677 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.23562902 podStartE2EDuration="7.373639889s" podCreationTimestamp="2026-02-19 10:05:07 +0000 UTC" firstStartedPulling="2026-02-19 10:05:09.036381288 +0000 UTC m=+1364.657702598" lastFinishedPulling="2026-02-19 10:05:13.174392157 +0000 UTC m=+1368.795713467" observedRunningTime="2026-02-19 10:05:14.352129876 +0000 UTC m=+1369.973451206" watchObservedRunningTime="2026-02-19 10:05:14.373639889 +0000 UTC m=+1369.994961199"
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.379093 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.84906711 podStartE2EDuration="7.379071861s" podCreationTimestamp="2026-02-19 10:05:07 +0000 UTC" firstStartedPulling="2026-02-19 10:05:08.587309889 +0000 UTC m=+1364.208631199" lastFinishedPulling="2026-02-19 10:05:13.11731464 +0000 UTC m=+1368.738635950" observedRunningTime="2026-02-19 10:05:14.372976602 +0000 UTC m=+1369.994297922" watchObservedRunningTime="2026-02-19 10:05:14.379071861 +0000 UTC m=+1370.000393171"
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.398084 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.136208634 podStartE2EDuration="7.398061282s" podCreationTimestamp="2026-02-19 10:05:07 +0000 UTC" firstStartedPulling="2026-02-19 10:05:08.873121221 +0000 UTC m=+1364.494442531" lastFinishedPulling="2026-02-19 10:05:13.134973869 +0000 UTC m=+1368.756295179" observedRunningTime="2026-02-19 10:05:14.388680104 +0000 UTC m=+1370.010001414" watchObservedRunningTime="2026-02-19 10:05:14.398061282 +0000 UTC m=+1370.019382602"
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.730266 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.818025 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m5dq\" (UniqueName: \"kubernetes.io/projected/88e68e49-9977-4222-bc61-95a5f5234d82-kube-api-access-5m5dq\") pod \"88e68e49-9977-4222-bc61-95a5f5234d82\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") "
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.818094 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-combined-ca-bundle\") pod \"88e68e49-9977-4222-bc61-95a5f5234d82\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") "
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.818217 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-sg-core-conf-yaml\") pod \"88e68e49-9977-4222-bc61-95a5f5234d82\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") "
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.818280 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e68e49-9977-4222-bc61-95a5f5234d82-run-httpd\") pod \"88e68e49-9977-4222-bc61-95a5f5234d82\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") "
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.818400 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e68e49-9977-4222-bc61-95a5f5234d82-log-httpd\") pod \"88e68e49-9977-4222-bc61-95a5f5234d82\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") "
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.818476 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-scripts\") pod \"88e68e49-9977-4222-bc61-95a5f5234d82\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") "
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.818500 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-config-data\") pod \"88e68e49-9977-4222-bc61-95a5f5234d82\" (UID: \"88e68e49-9977-4222-bc61-95a5f5234d82\") "
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.819615 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88e68e49-9977-4222-bc61-95a5f5234d82-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "88e68e49-9977-4222-bc61-95a5f5234d82" (UID: "88e68e49-9977-4222-bc61-95a5f5234d82"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.826286 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88e68e49-9977-4222-bc61-95a5f5234d82-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "88e68e49-9977-4222-bc61-95a5f5234d82" (UID: "88e68e49-9977-4222-bc61-95a5f5234d82"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.844016 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-scripts" (OuterVolumeSpecName: "scripts") pod "88e68e49-9977-4222-bc61-95a5f5234d82" (UID: "88e68e49-9977-4222-bc61-95a5f5234d82"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.845474 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88e68e49-9977-4222-bc61-95a5f5234d82-kube-api-access-5m5dq" (OuterVolumeSpecName: "kube-api-access-5m5dq") pod "88e68e49-9977-4222-bc61-95a5f5234d82" (UID: "88e68e49-9977-4222-bc61-95a5f5234d82"). InnerVolumeSpecName "kube-api-access-5m5dq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.875444 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "88e68e49-9977-4222-bc61-95a5f5234d82" (UID: "88e68e49-9977-4222-bc61-95a5f5234d82"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.928614 4965 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e68e49-9977-4222-bc61-95a5f5234d82-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.928655 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.928668 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m5dq\" (UniqueName: \"kubernetes.io/projected/88e68e49-9977-4222-bc61-95a5f5234d82-kube-api-access-5m5dq\") on node \"crc\" DevicePath \"\""
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.928684 4965 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.928696 4965 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e68e49-9977-4222-bc61-95a5f5234d82-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 10:05:14 crc kubenswrapper[4965]: I0219 10:05:14.999464 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-config-data" (OuterVolumeSpecName: "config-data") pod "88e68e49-9977-4222-bc61-95a5f5234d82" (UID: "88e68e49-9977-4222-bc61-95a5f5234d82"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.016529 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88e68e49-9977-4222-bc61-95a5f5234d82" (UID: "88e68e49-9977-4222-bc61-95a5f5234d82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.017787 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.032783 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.032821 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88e68e49-9977-4222-bc61-95a5f5234d82-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.136141 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbfed410-b5a6-4f7e-a33b-0bebd55379de-combined-ca-bundle\") pod \"dbfed410-b5a6-4f7e-a33b-0bebd55379de\" (UID: \"dbfed410-b5a6-4f7e-a33b-0bebd55379de\") " Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.145392 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbfed410-b5a6-4f7e-a33b-0bebd55379de-config-data\") pod \"dbfed410-b5a6-4f7e-a33b-0bebd55379de\" (UID: \"dbfed410-b5a6-4f7e-a33b-0bebd55379de\") " Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.145587 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lrns\" (UniqueName: \"kubernetes.io/projected/dbfed410-b5a6-4f7e-a33b-0bebd55379de-kube-api-access-9lrns\") pod \"dbfed410-b5a6-4f7e-a33b-0bebd55379de\" (UID: \"dbfed410-b5a6-4f7e-a33b-0bebd55379de\") " Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.145671 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbfed410-b5a6-4f7e-a33b-0bebd55379de-logs\") pod \"dbfed410-b5a6-4f7e-a33b-0bebd55379de\" (UID: \"dbfed410-b5a6-4f7e-a33b-0bebd55379de\") " Feb 
19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.147136 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbfed410-b5a6-4f7e-a33b-0bebd55379de-logs" (OuterVolumeSpecName: "logs") pod "dbfed410-b5a6-4f7e-a33b-0bebd55379de" (UID: "dbfed410-b5a6-4f7e-a33b-0bebd55379de"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.178443 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbfed410-b5a6-4f7e-a33b-0bebd55379de-kube-api-access-9lrns" (OuterVolumeSpecName: "kube-api-access-9lrns") pod "dbfed410-b5a6-4f7e-a33b-0bebd55379de" (UID: "dbfed410-b5a6-4f7e-a33b-0bebd55379de"). InnerVolumeSpecName "kube-api-access-9lrns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.190418 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbfed410-b5a6-4f7e-a33b-0bebd55379de-config-data" (OuterVolumeSpecName: "config-data") pod "dbfed410-b5a6-4f7e-a33b-0bebd55379de" (UID: "dbfed410-b5a6-4f7e-a33b-0bebd55379de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.241299 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbfed410-b5a6-4f7e-a33b-0bebd55379de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbfed410-b5a6-4f7e-a33b-0bebd55379de" (UID: "dbfed410-b5a6-4f7e-a33b-0bebd55379de"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.249712 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lrns\" (UniqueName: \"kubernetes.io/projected/dbfed410-b5a6-4f7e-a33b-0bebd55379de-kube-api-access-9lrns\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.249750 4965 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbfed410-b5a6-4f7e-a33b-0bebd55379de-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.249764 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbfed410-b5a6-4f7e-a33b-0bebd55379de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.249772 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbfed410-b5a6-4f7e-a33b-0bebd55379de-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.362448 4965 generic.go:334] "Generic (PLEG): container finished" podID="dbfed410-b5a6-4f7e-a33b-0bebd55379de" containerID="2a7b486c7dcc9c70b13edbd41b42c324d82970359314055f542ed780688f5ccb" exitCode=0 Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.362713 4965 generic.go:334] "Generic (PLEG): container finished" podID="dbfed410-b5a6-4f7e-a33b-0bebd55379de" containerID="7870d71421292b73e018889f6d727bd9369d55541b9758591742518df9506e47" exitCode=143 Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.362826 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.363026 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dbfed410-b5a6-4f7e-a33b-0bebd55379de","Type":"ContainerDied","Data":"2a7b486c7dcc9c70b13edbd41b42c324d82970359314055f542ed780688f5ccb"} Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.363069 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dbfed410-b5a6-4f7e-a33b-0bebd55379de","Type":"ContainerDied","Data":"7870d71421292b73e018889f6d727bd9369d55541b9758591742518df9506e47"} Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.363078 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dbfed410-b5a6-4f7e-a33b-0bebd55379de","Type":"ContainerDied","Data":"e69be9c22fa256646002d1ee6bf6b2121b98f5049e49ab65d21b31aaf3be5d28"} Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.363095 4965 scope.go:117] "RemoveContainer" containerID="2a7b486c7dcc9c70b13edbd41b42c324d82970359314055f542ed780688f5ccb" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.371548 4965 generic.go:334] "Generic (PLEG): container finished" podID="123db546-f337-4b0c-828e-6b677b1fb954" containerID="1f579eaee71c7a9dc8a5379f1e5f83bc285c6d4eaf2acc3566b5f381ea6667c4" exitCode=0 Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.371578 4965 generic.go:334] "Generic (PLEG): container finished" podID="123db546-f337-4b0c-828e-6b677b1fb954" containerID="35d9cb5fff67f62d381376ed728ea50496920f694b5ea9d371ddbdc0da48546d" exitCode=143 Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.371644 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"123db546-f337-4b0c-828e-6b677b1fb954","Type":"ContainerDied","Data":"1f579eaee71c7a9dc8a5379f1e5f83bc285c6d4eaf2acc3566b5f381ea6667c4"} Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 
10:05:15.371670 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"123db546-f337-4b0c-828e-6b677b1fb954","Type":"ContainerDied","Data":"35d9cb5fff67f62d381376ed728ea50496920f694b5ea9d371ddbdc0da48546d"} Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.371679 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"123db546-f337-4b0c-828e-6b677b1fb954","Type":"ContainerDied","Data":"873a2d6b95840a03ef505d64eb76c20caff547d87461ea36bf50c1aaa47405b2"} Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.371713 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="873a2d6b95840a03ef505d64eb76c20caff547d87461ea36bf50c1aaa47405b2" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.374457 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.381482 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88e68e49-9977-4222-bc61-95a5f5234d82","Type":"ContainerDied","Data":"c4c837c920467e270c21bc2395a03c7a78da1f8ca00b1b343ca2411e96051784"} Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.381530 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.385812 4965 generic.go:334] "Generic (PLEG): container finished" podID="ac5be26a-d9dd-4131-b369-55b0a89377a5" containerID="8692949f4f511afb96b48df4057b63bbd3f837003ad9a7b7d588b1cd2c313edc" exitCode=0 Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.385874 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ac5be26a-d9dd-4131-b369-55b0a89377a5","Type":"ContainerDied","Data":"8692949f4f511afb96b48df4057b63bbd3f837003ad9a7b7d588b1cd2c313edc"} Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.422662 4965 scope.go:117] "RemoveContainer" containerID="7870d71421292b73e018889f6d727bd9369d55541b9758591742518df9506e47" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.477255 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123db546-f337-4b0c-828e-6b677b1fb954-combined-ca-bundle\") pod \"123db546-f337-4b0c-828e-6b677b1fb954\" (UID: \"123db546-f337-4b0c-828e-6b677b1fb954\") " Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.477384 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/123db546-f337-4b0c-828e-6b677b1fb954-logs\") pod \"123db546-f337-4b0c-828e-6b677b1fb954\" (UID: \"123db546-f337-4b0c-828e-6b677b1fb954\") " Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.477463 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czl7c\" (UniqueName: \"kubernetes.io/projected/123db546-f337-4b0c-828e-6b677b1fb954-kube-api-access-czl7c\") pod \"123db546-f337-4b0c-828e-6b677b1fb954\" (UID: \"123db546-f337-4b0c-828e-6b677b1fb954\") " Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.477601 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123db546-f337-4b0c-828e-6b677b1fb954-config-data\") pod \"123db546-f337-4b0c-828e-6b677b1fb954\" (UID: \"123db546-f337-4b0c-828e-6b677b1fb954\") " Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.558629 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/123db546-f337-4b0c-828e-6b677b1fb954-logs" (OuterVolumeSpecName: "logs") pod "123db546-f337-4b0c-828e-6b677b1fb954" (UID: "123db546-f337-4b0c-828e-6b677b1fb954"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.559671 4965 scope.go:117] "RemoveContainer" containerID="2a7b486c7dcc9c70b13edbd41b42c324d82970359314055f542ed780688f5ccb" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.561052 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/123db546-f337-4b0c-828e-6b677b1fb954-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "123db546-f337-4b0c-828e-6b677b1fb954" (UID: "123db546-f337-4b0c-828e-6b677b1fb954"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:15 crc kubenswrapper[4965]: E0219 10:05:15.561280 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a7b486c7dcc9c70b13edbd41b42c324d82970359314055f542ed780688f5ccb\": container with ID starting with 2a7b486c7dcc9c70b13edbd41b42c324d82970359314055f542ed780688f5ccb not found: ID does not exist" containerID="2a7b486c7dcc9c70b13edbd41b42c324d82970359314055f542ed780688f5ccb" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.561319 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a7b486c7dcc9c70b13edbd41b42c324d82970359314055f542ed780688f5ccb"} err="failed to get container status \"2a7b486c7dcc9c70b13edbd41b42c324d82970359314055f542ed780688f5ccb\": rpc error: code = NotFound desc = could not find container \"2a7b486c7dcc9c70b13edbd41b42c324d82970359314055f542ed780688f5ccb\": container with ID starting with 2a7b486c7dcc9c70b13edbd41b42c324d82970359314055f542ed780688f5ccb not found: ID does not exist" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.562574 4965 scope.go:117] "RemoveContainer" containerID="7870d71421292b73e018889f6d727bd9369d55541b9758591742518df9506e47" Feb 19 10:05:15 crc kubenswrapper[4965]: E0219 10:05:15.567488 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7870d71421292b73e018889f6d727bd9369d55541b9758591742518df9506e47\": container with ID starting with 7870d71421292b73e018889f6d727bd9369d55541b9758591742518df9506e47 not found: ID does not exist" containerID="7870d71421292b73e018889f6d727bd9369d55541b9758591742518df9506e47" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.567529 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7870d71421292b73e018889f6d727bd9369d55541b9758591742518df9506e47"} err="failed 
to get container status \"7870d71421292b73e018889f6d727bd9369d55541b9758591742518df9506e47\": rpc error: code = NotFound desc = could not find container \"7870d71421292b73e018889f6d727bd9369d55541b9758591742518df9506e47\": container with ID starting with 7870d71421292b73e018889f6d727bd9369d55541b9758591742518df9506e47 not found: ID does not exist" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.567552 4965 scope.go:117] "RemoveContainer" containerID="2a7b486c7dcc9c70b13edbd41b42c324d82970359314055f542ed780688f5ccb" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.567590 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/123db546-f337-4b0c-828e-6b677b1fb954-kube-api-access-czl7c" (OuterVolumeSpecName: "kube-api-access-czl7c") pod "123db546-f337-4b0c-828e-6b677b1fb954" (UID: "123db546-f337-4b0c-828e-6b677b1fb954"). InnerVolumeSpecName "kube-api-access-czl7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.580069 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123db546-f337-4b0c-828e-6b677b1fb954-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.580104 4965 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/123db546-f337-4b0c-828e-6b677b1fb954-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.580113 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czl7c\" (UniqueName: \"kubernetes.io/projected/123db546-f337-4b0c-828e-6b677b1fb954-kube-api-access-czl7c\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.597374 4965 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2a7b486c7dcc9c70b13edbd41b42c324d82970359314055f542ed780688f5ccb"} err="failed to get container status \"2a7b486c7dcc9c70b13edbd41b42c324d82970359314055f542ed780688f5ccb\": rpc error: code = NotFound desc = could not find container \"2a7b486c7dcc9c70b13edbd41b42c324d82970359314055f542ed780688f5ccb\": container with ID starting with 2a7b486c7dcc9c70b13edbd41b42c324d82970359314055f542ed780688f5ccb not found: ID does not exist" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.597424 4965 scope.go:117] "RemoveContainer" containerID="7870d71421292b73e018889f6d727bd9369d55541b9758591742518df9506e47" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.597390 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/123db546-f337-4b0c-828e-6b677b1fb954-config-data" (OuterVolumeSpecName: "config-data") pod "123db546-f337-4b0c-828e-6b677b1fb954" (UID: "123db546-f337-4b0c-828e-6b677b1fb954"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.611782 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7870d71421292b73e018889f6d727bd9369d55541b9758591742518df9506e47"} err="failed to get container status \"7870d71421292b73e018889f6d727bd9369d55541b9758591742518df9506e47\": rpc error: code = NotFound desc = could not find container \"7870d71421292b73e018889f6d727bd9369d55541b9758591742518df9506e47\": container with ID starting with 7870d71421292b73e018889f6d727bd9369d55541b9758591742518df9506e47 not found: ID does not exist" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.612359 4965 scope.go:117] "RemoveContainer" containerID="fdb7c5f403c89321c60963fc6f4e717d05f14d805667944e624b94c44b07b5d2" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.628416 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.673848 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.691712 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123db546-f337-4b0c-828e-6b677b1fb954-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.718448 4965 scope.go:117] "RemoveContainer" containerID="dbff79fcd301d92b8b4712c1865b0b8af703a6a34fc47e443a192f4516f28f77" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.736599 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.740424 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.748978 4965 scope.go:117] "RemoveContainer" containerID="6aefae543a28b56087fcbe232e27c6e71e20964e701051d21a34f6193eb5bd60" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.751500 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.763351 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:05:15 crc kubenswrapper[4965]: E0219 10:05:15.763825 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbfed410-b5a6-4f7e-a33b-0bebd55379de" containerName="nova-metadata-log" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.763843 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbfed410-b5a6-4f7e-a33b-0bebd55379de" containerName="nova-metadata-log" Feb 19 10:05:15 crc kubenswrapper[4965]: E0219 10:05:15.763885 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123db546-f337-4b0c-828e-6b677b1fb954" containerName="nova-api-log" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.763891 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="123db546-f337-4b0c-828e-6b677b1fb954" containerName="nova-api-log" Feb 19 10:05:15 crc kubenswrapper[4965]: E0219 10:05:15.763899 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac5be26a-d9dd-4131-b369-55b0a89377a5" containerName="nova-cell0-conductor-conductor" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.763905 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac5be26a-d9dd-4131-b369-55b0a89377a5" containerName="nova-cell0-conductor-conductor" Feb 19 10:05:15 crc kubenswrapper[4965]: E0219 10:05:15.763916 4965 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="88e68e49-9977-4222-bc61-95a5f5234d82" containerName="ceilometer-central-agent" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.763922 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="88e68e49-9977-4222-bc61-95a5f5234d82" containerName="ceilometer-central-agent" Feb 19 10:05:15 crc kubenswrapper[4965]: E0219 10:05:15.763937 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88e68e49-9977-4222-bc61-95a5f5234d82" containerName="proxy-httpd" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.763942 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="88e68e49-9977-4222-bc61-95a5f5234d82" containerName="proxy-httpd" Feb 19 10:05:15 crc kubenswrapper[4965]: E0219 10:05:15.763954 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123db546-f337-4b0c-828e-6b677b1fb954" containerName="nova-api-api" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.763960 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="123db546-f337-4b0c-828e-6b677b1fb954" containerName="nova-api-api" Feb 19 10:05:15 crc kubenswrapper[4965]: E0219 10:05:15.763971 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88e68e49-9977-4222-bc61-95a5f5234d82" containerName="sg-core" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.763977 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="88e68e49-9977-4222-bc61-95a5f5234d82" containerName="sg-core" Feb 19 10:05:15 crc kubenswrapper[4965]: E0219 10:05:15.763993 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbfed410-b5a6-4f7e-a33b-0bebd55379de" containerName="nova-metadata-metadata" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.763999 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbfed410-b5a6-4f7e-a33b-0bebd55379de" containerName="nova-metadata-metadata" Feb 19 10:05:15 crc kubenswrapper[4965]: E0219 10:05:15.764016 4965 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="88e68e49-9977-4222-bc61-95a5f5234d82" containerName="ceilometer-notification-agent" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.764022 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="88e68e49-9977-4222-bc61-95a5f5234d82" containerName="ceilometer-notification-agent" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.764219 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="88e68e49-9977-4222-bc61-95a5f5234d82" containerName="ceilometer-central-agent" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.764233 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="123db546-f337-4b0c-828e-6b677b1fb954" containerName="nova-api-log" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.764245 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="88e68e49-9977-4222-bc61-95a5f5234d82" containerName="sg-core" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.764256 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="88e68e49-9977-4222-bc61-95a5f5234d82" containerName="proxy-httpd" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.764266 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="123db546-f337-4b0c-828e-6b677b1fb954" containerName="nova-api-api" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.764276 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbfed410-b5a6-4f7e-a33b-0bebd55379de" containerName="nova-metadata-metadata" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.764287 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac5be26a-d9dd-4131-b369-55b0a89377a5" containerName="nova-cell0-conductor-conductor" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.764371 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="88e68e49-9977-4222-bc61-95a5f5234d82" containerName="ceilometer-notification-agent" Feb 19 
10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.764386 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbfed410-b5a6-4f7e-a33b-0bebd55379de" containerName="nova-metadata-log" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.768893 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.772749 4965 scope.go:117] "RemoveContainer" containerID="5451188c3516db6f5cc1c463714dc7928296541dd0b4a5adb26ad6986c4fa5bf" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.772897 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.773079 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.782826 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.784738 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.789696 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.789849 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.795210 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.802938 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac5be26a-d9dd-4131-b369-55b0a89377a5-combined-ca-bundle\") pod \"ac5be26a-d9dd-4131-b369-55b0a89377a5\" (UID: \"ac5be26a-d9dd-4131-b369-55b0a89377a5\") " Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.803041 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac5be26a-d9dd-4131-b369-55b0a89377a5-config-data\") pod \"ac5be26a-d9dd-4131-b369-55b0a89377a5\" (UID: \"ac5be26a-d9dd-4131-b369-55b0a89377a5\") " Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.803500 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmhq5\" (UniqueName: \"kubernetes.io/projected/ac5be26a-d9dd-4131-b369-55b0a89377a5-kube-api-access-hmhq5\") pod \"ac5be26a-d9dd-4131-b369-55b0a89377a5\" (UID: \"ac5be26a-d9dd-4131-b369-55b0a89377a5\") " Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.810423 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac5be26a-d9dd-4131-b369-55b0a89377a5-kube-api-access-hmhq5" (OuterVolumeSpecName: "kube-api-access-hmhq5") pod "ac5be26a-d9dd-4131-b369-55b0a89377a5" (UID: "ac5be26a-d9dd-4131-b369-55b0a89377a5"). 
InnerVolumeSpecName "kube-api-access-hmhq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.811058 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.866492 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac5be26a-d9dd-4131-b369-55b0a89377a5-config-data" (OuterVolumeSpecName: "config-data") pod "ac5be26a-d9dd-4131-b369-55b0a89377a5" (UID: "ac5be26a-d9dd-4131-b369-55b0a89377a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.901367 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac5be26a-d9dd-4131-b369-55b0a89377a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac5be26a-d9dd-4131-b369-55b0a89377a5" (UID: "ac5be26a-d9dd-4131-b369-55b0a89377a5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.905756 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c48eb36-b3c5-4516-a345-6dc813d425cb-log-httpd\") pod \"ceilometer-0\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " pod="openstack/ceilometer-0" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.905816 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b64316-be3d-46e5-b67d-176aae2dd815-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c3b64316-be3d-46e5-b67d-176aae2dd815\") " pod="openstack/nova-metadata-0" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.905861 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr76t\" (UniqueName: \"kubernetes.io/projected/c3b64316-be3d-46e5-b67d-176aae2dd815-kube-api-access-mr76t\") pod \"nova-metadata-0\" (UID: \"c3b64316-be3d-46e5-b67d-176aae2dd815\") " pod="openstack/nova-metadata-0" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.905913 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-config-data\") pod \"ceilometer-0\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " pod="openstack/ceilometer-0" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.905962 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b64316-be3d-46e5-b67d-176aae2dd815-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c3b64316-be3d-46e5-b67d-176aae2dd815\") " pod="openstack/nova-metadata-0" Feb 19 10:05:15 crc 
kubenswrapper[4965]: I0219 10:05:15.906008 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-scripts\") pod \"ceilometer-0\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " pod="openstack/ceilometer-0" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.906029 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3b64316-be3d-46e5-b67d-176aae2dd815-config-data\") pod \"nova-metadata-0\" (UID: \"c3b64316-be3d-46e5-b67d-176aae2dd815\") " pod="openstack/nova-metadata-0" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.906071 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " pod="openstack/ceilometer-0" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.906105 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3b64316-be3d-46e5-b67d-176aae2dd815-logs\") pod \"nova-metadata-0\" (UID: \"c3b64316-be3d-46e5-b67d-176aae2dd815\") " pod="openstack/nova-metadata-0" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.906124 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c48eb36-b3c5-4516-a345-6dc813d425cb-run-httpd\") pod \"ceilometer-0\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " pod="openstack/ceilometer-0" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.906143 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " pod="openstack/ceilometer-0" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.906204 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw97c\" (UniqueName: \"kubernetes.io/projected/1c48eb36-b3c5-4516-a345-6dc813d425cb-kube-api-access-dw97c\") pod \"ceilometer-0\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " pod="openstack/ceilometer-0" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.906261 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac5be26a-d9dd-4131-b369-55b0a89377a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.906272 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac5be26a-d9dd-4131-b369-55b0a89377a5-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:15 crc kubenswrapper[4965]: I0219 10:05:15.906281 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmhq5\" (UniqueName: \"kubernetes.io/projected/ac5be26a-d9dd-4131-b369-55b0a89377a5-kube-api-access-hmhq5\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.007768 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw97c\" (UniqueName: \"kubernetes.io/projected/1c48eb36-b3c5-4516-a345-6dc813d425cb-kube-api-access-dw97c\") pod \"ceilometer-0\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " pod="openstack/ceilometer-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.007867 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1c48eb36-b3c5-4516-a345-6dc813d425cb-log-httpd\") pod \"ceilometer-0\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " pod="openstack/ceilometer-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.007904 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b64316-be3d-46e5-b67d-176aae2dd815-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c3b64316-be3d-46e5-b67d-176aae2dd815\") " pod="openstack/nova-metadata-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.007944 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr76t\" (UniqueName: \"kubernetes.io/projected/c3b64316-be3d-46e5-b67d-176aae2dd815-kube-api-access-mr76t\") pod \"nova-metadata-0\" (UID: \"c3b64316-be3d-46e5-b67d-176aae2dd815\") " pod="openstack/nova-metadata-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.008015 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-config-data\") pod \"ceilometer-0\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " pod="openstack/ceilometer-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.008310 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b64316-be3d-46e5-b67d-176aae2dd815-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c3b64316-be3d-46e5-b67d-176aae2dd815\") " pod="openstack/nova-metadata-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.008385 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-scripts\") pod \"ceilometer-0\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " pod="openstack/ceilometer-0" Feb 19 
10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.008418 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3b64316-be3d-46e5-b67d-176aae2dd815-config-data\") pod \"nova-metadata-0\" (UID: \"c3b64316-be3d-46e5-b67d-176aae2dd815\") " pod="openstack/nova-metadata-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.008480 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " pod="openstack/ceilometer-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.008529 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3b64316-be3d-46e5-b67d-176aae2dd815-logs\") pod \"nova-metadata-0\" (UID: \"c3b64316-be3d-46e5-b67d-176aae2dd815\") " pod="openstack/nova-metadata-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.008558 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c48eb36-b3c5-4516-a345-6dc813d425cb-run-httpd\") pod \"ceilometer-0\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " pod="openstack/ceilometer-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.008590 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " pod="openstack/ceilometer-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.008485 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c48eb36-b3c5-4516-a345-6dc813d425cb-log-httpd\") 
pod \"ceilometer-0\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " pod="openstack/ceilometer-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.009088 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c48eb36-b3c5-4516-a345-6dc813d425cb-run-httpd\") pod \"ceilometer-0\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " pod="openstack/ceilometer-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.009112 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3b64316-be3d-46e5-b67d-176aae2dd815-logs\") pod \"nova-metadata-0\" (UID: \"c3b64316-be3d-46e5-b67d-176aae2dd815\") " pod="openstack/nova-metadata-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.011643 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-scripts\") pod \"ceilometer-0\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " pod="openstack/ceilometer-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.013868 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b64316-be3d-46e5-b67d-176aae2dd815-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c3b64316-be3d-46e5-b67d-176aae2dd815\") " pod="openstack/nova-metadata-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.015943 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " pod="openstack/ceilometer-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.016566 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " pod="openstack/ceilometer-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.018885 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3b64316-be3d-46e5-b67d-176aae2dd815-config-data\") pod \"nova-metadata-0\" (UID: \"c3b64316-be3d-46e5-b67d-176aae2dd815\") " pod="openstack/nova-metadata-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.020335 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-config-data\") pod \"ceilometer-0\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " pod="openstack/ceilometer-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.024858 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b64316-be3d-46e5-b67d-176aae2dd815-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c3b64316-be3d-46e5-b67d-176aae2dd815\") " pod="openstack/nova-metadata-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.027901 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw97c\" (UniqueName: \"kubernetes.io/projected/1c48eb36-b3c5-4516-a345-6dc813d425cb-kube-api-access-dw97c\") pod \"ceilometer-0\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " pod="openstack/ceilometer-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.028151 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr76t\" (UniqueName: \"kubernetes.io/projected/c3b64316-be3d-46e5-b67d-176aae2dd815-kube-api-access-mr76t\") pod \"nova-metadata-0\" (UID: \"c3b64316-be3d-46e5-b67d-176aae2dd815\") " pod="openstack/nova-metadata-0" Feb 19 10:05:16 crc 
kubenswrapper[4965]: I0219 10:05:16.109050 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.184693 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.404988 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ac5be26a-d9dd-4131-b369-55b0a89377a5","Type":"ContainerDied","Data":"ae7c1aaf2f2a50b29f01c3e9b404483ea24482558db294c1dfc43c5c5440fc6c"} Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.405039 4965 scope.go:117] "RemoveContainer" containerID="8692949f4f511afb96b48df4057b63bbd3f837003ad9a7b7d588b1cd2c313edc" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.405148 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.432478 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.489553 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.517105 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.536266 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.538379 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.546513 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.547909 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.558785 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.589111 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.602864 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.602935 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.602993 4965 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.609884 4965 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6b5800cd8d3cdf0bd49b0429f539e236aa824e01e6e8bf55c3f2737a438df531"} pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.609992 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" containerID="cri-o://6b5800cd8d3cdf0bd49b0429f539e236aa824e01e6e8bf55c3f2737a438df531" gracePeriod=600 Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.615944 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.621423 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hrzv\" (UniqueName: \"kubernetes.io/projected/8af99122-00d4-45e7-8e66-f541ba54a66a-kube-api-access-2hrzv\") pod \"nova-cell0-conductor-0\" (UID: \"8af99122-00d4-45e7-8e66-f541ba54a66a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.621473 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8af99122-00d4-45e7-8e66-f541ba54a66a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8af99122-00d4-45e7-8e66-f541ba54a66a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.621609 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af99122-00d4-45e7-8e66-f541ba54a66a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8af99122-00d4-45e7-8e66-f541ba54a66a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.637184 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.644015 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.686436 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.721259 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.723086 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a730512c-555c-40bd-835b-e2ce5242bdff-config-data\") pod \"nova-api-0\" (UID: \"a730512c-555c-40bd-835b-e2ce5242bdff\") " pod="openstack/nova-api-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.723135 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8af99122-00d4-45e7-8e66-f541ba54a66a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8af99122-00d4-45e7-8e66-f541ba54a66a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.723150 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hrzv\" (UniqueName: \"kubernetes.io/projected/8af99122-00d4-45e7-8e66-f541ba54a66a-kube-api-access-2hrzv\") pod \"nova-cell0-conductor-0\" (UID: \"8af99122-00d4-45e7-8e66-f541ba54a66a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.723191 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4dbk\" (UniqueName: \"kubernetes.io/projected/a730512c-555c-40bd-835b-e2ce5242bdff-kube-api-access-s4dbk\") pod \"nova-api-0\" (UID: \"a730512c-555c-40bd-835b-e2ce5242bdff\") " 
pod="openstack/nova-api-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.723226 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a730512c-555c-40bd-835b-e2ce5242bdff-logs\") pod \"nova-api-0\" (UID: \"a730512c-555c-40bd-835b-e2ce5242bdff\") " pod="openstack/nova-api-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.723256 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a730512c-555c-40bd-835b-e2ce5242bdff-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a730512c-555c-40bd-835b-e2ce5242bdff\") " pod="openstack/nova-api-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.723369 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af99122-00d4-45e7-8e66-f541ba54a66a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8af99122-00d4-45e7-8e66-f541ba54a66a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.732954 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af99122-00d4-45e7-8e66-f541ba54a66a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8af99122-00d4-45e7-8e66-f541ba54a66a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.733776 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8af99122-00d4-45e7-8e66-f541ba54a66a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8af99122-00d4-45e7-8e66-f541ba54a66a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.739119 4965 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-2hrzv\" (UniqueName: \"kubernetes.io/projected/8af99122-00d4-45e7-8e66-f541ba54a66a-kube-api-access-2hrzv\") pod \"nova-cell0-conductor-0\" (UID: \"8af99122-00d4-45e7-8e66-f541ba54a66a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.825294 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a730512c-555c-40bd-835b-e2ce5242bdff-config-data\") pod \"nova-api-0\" (UID: \"a730512c-555c-40bd-835b-e2ce5242bdff\") " pod="openstack/nova-api-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.825386 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4dbk\" (UniqueName: \"kubernetes.io/projected/a730512c-555c-40bd-835b-e2ce5242bdff-kube-api-access-s4dbk\") pod \"nova-api-0\" (UID: \"a730512c-555c-40bd-835b-e2ce5242bdff\") " pod="openstack/nova-api-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.825410 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a730512c-555c-40bd-835b-e2ce5242bdff-logs\") pod \"nova-api-0\" (UID: \"a730512c-555c-40bd-835b-e2ce5242bdff\") " pod="openstack/nova-api-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.825439 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a730512c-555c-40bd-835b-e2ce5242bdff-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a730512c-555c-40bd-835b-e2ce5242bdff\") " pod="openstack/nova-api-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.827574 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a730512c-555c-40bd-835b-e2ce5242bdff-logs\") pod \"nova-api-0\" (UID: \"a730512c-555c-40bd-835b-e2ce5242bdff\") " pod="openstack/nova-api-0" Feb 19 
10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.831362 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a730512c-555c-40bd-835b-e2ce5242bdff-config-data\") pod \"nova-api-0\" (UID: \"a730512c-555c-40bd-835b-e2ce5242bdff\") " pod="openstack/nova-api-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.833389 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a730512c-555c-40bd-835b-e2ce5242bdff-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a730512c-555c-40bd-835b-e2ce5242bdff\") " pod="openstack/nova-api-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.848016 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4dbk\" (UniqueName: \"kubernetes.io/projected/a730512c-555c-40bd-835b-e2ce5242bdff-kube-api-access-s4dbk\") pod \"nova-api-0\" (UID: \"a730512c-555c-40bd-835b-e2ce5242bdff\") " pod="openstack/nova-api-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.868405 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 10:05:16 crc kubenswrapper[4965]: I0219 10:05:16.933117 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:05:17 crc kubenswrapper[4965]: I0219 10:05:17.033328 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:05:17 crc kubenswrapper[4965]: I0219 10:05:17.240687 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="123db546-f337-4b0c-828e-6b677b1fb954" path="/var/lib/kubelet/pods/123db546-f337-4b0c-828e-6b677b1fb954/volumes" Feb 19 10:05:17 crc kubenswrapper[4965]: I0219 10:05:17.250586 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88e68e49-9977-4222-bc61-95a5f5234d82" path="/var/lib/kubelet/pods/88e68e49-9977-4222-bc61-95a5f5234d82/volumes" Feb 19 10:05:17 crc kubenswrapper[4965]: I0219 10:05:17.256115 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac5be26a-d9dd-4131-b369-55b0a89377a5" path="/var/lib/kubelet/pods/ac5be26a-d9dd-4131-b369-55b0a89377a5/volumes" Feb 19 10:05:17 crc kubenswrapper[4965]: I0219 10:05:17.256765 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbfed410-b5a6-4f7e-a33b-0bebd55379de" path="/var/lib/kubelet/pods/dbfed410-b5a6-4f7e-a33b-0bebd55379de/volumes" Feb 19 10:05:17 crc kubenswrapper[4965]: I0219 10:05:17.478172 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 10:05:17 crc kubenswrapper[4965]: I0219 10:05:17.481783 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c48eb36-b3c5-4516-a345-6dc813d425cb","Type":"ContainerStarted","Data":"0063e24ff2fbb68b3564678bf6a64e11e71b5973b5ba2e436d6cb04929a70ee4"} Feb 19 10:05:17 crc kubenswrapper[4965]: I0219 10:05:17.487468 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c3b64316-be3d-46e5-b67d-176aae2dd815","Type":"ContainerStarted","Data":"725b36523f2d9a796a7008058a06c690b679116b8c0c93f8ef64b5cddafd019d"} Feb 19 10:05:17 crc kubenswrapper[4965]: I0219 10:05:17.487514 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"c3b64316-be3d-46e5-b67d-176aae2dd815","Type":"ContainerStarted","Data":"2ada989f14ac898fc49b0bfbc9c408be769c620227dbef8dd31bccf027abc4d9"} Feb 19 10:05:17 crc kubenswrapper[4965]: I0219 10:05:17.495223 4965 generic.go:334] "Generic (PLEG): container finished" podID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerID="6b5800cd8d3cdf0bd49b0429f539e236aa824e01e6e8bf55c3f2737a438df531" exitCode=0 Feb 19 10:05:17 crc kubenswrapper[4965]: I0219 10:05:17.495271 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerDied","Data":"6b5800cd8d3cdf0bd49b0429f539e236aa824e01e6e8bf55c3f2737a438df531"} Feb 19 10:05:17 crc kubenswrapper[4965]: I0219 10:05:17.495302 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerStarted","Data":"76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c"} Feb 19 10:05:17 crc kubenswrapper[4965]: I0219 10:05:17.495321 4965 scope.go:117] "RemoveContainer" containerID="ac6c3a11724d0b4226206f45a1c130a82ce4948594339da20a6fb6307209a67e" Feb 19 10:05:17 crc kubenswrapper[4965]: I0219 10:05:17.645148 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:05:17 crc kubenswrapper[4965]: I0219 10:05:17.730427 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 10:05:18 crc kubenswrapper[4965]: I0219 10:05:18.181743 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:18 crc kubenswrapper[4965]: I0219 10:05:18.313373 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4" Feb 19 10:05:18 crc kubenswrapper[4965]: I0219 10:05:18.386331 4965 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-66r4m"] Feb 19 10:05:18 crc kubenswrapper[4965]: I0219 10:05:18.386636 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58bd69657f-66r4m" podUID="feacab4f-9866-41e7-a8c2-e9850aff1252" containerName="dnsmasq-dns" containerID="cri-o://7a4838ae6a8b0456b86cae2a6be896db8b6d9e6ee13475213a4ad0f585b35bec" gracePeriod=10 Feb 19 10:05:18 crc kubenswrapper[4965]: I0219 10:05:18.611912 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a730512c-555c-40bd-835b-e2ce5242bdff","Type":"ContainerStarted","Data":"096c0d9f6cbed57ebd73e1acb17c5ac851c1e7bb8c047ed14ff1387c59ef76a5"} Feb 19 10:05:18 crc kubenswrapper[4965]: I0219 10:05:18.611952 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a730512c-555c-40bd-835b-e2ce5242bdff","Type":"ContainerStarted","Data":"fc7b169809ee70adf88159fd1bba1cbfd20f5a48f1d70c32635c48718cf4dfb0"} Feb 19 10:05:18 crc kubenswrapper[4965]: I0219 10:05:18.611965 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a730512c-555c-40bd-835b-e2ce5242bdff","Type":"ContainerStarted","Data":"0c3b7f145126edce3dcd3ddbeae5a4943d9af5fc0482c7405a356552409a9be7"} Feb 19 10:05:18 crc kubenswrapper[4965]: I0219 10:05:18.627442 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c48eb36-b3c5-4516-a345-6dc813d425cb","Type":"ContainerStarted","Data":"95b86d8f471de0d406d27b87b967f19339596823d3562e22741b5ef38fbefd78"} Feb 19 10:05:18 crc kubenswrapper[4965]: I0219 10:05:18.639126 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.639107315 podStartE2EDuration="2.639107315s" podCreationTimestamp="2026-02-19 10:05:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:05:18.639061904 +0000 UTC m=+1374.260383204" watchObservedRunningTime="2026-02-19 10:05:18.639107315 +0000 UTC m=+1374.260428625" Feb 19 10:05:18 crc kubenswrapper[4965]: I0219 10:05:18.646208 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c3b64316-be3d-46e5-b67d-176aae2dd815","Type":"ContainerStarted","Data":"960ed1676959bcb0f35d44e24dae95f7ebde5837397559b77cc0f41f0ff02635"} Feb 19 10:05:18 crc kubenswrapper[4965]: I0219 10:05:18.666799 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8af99122-00d4-45e7-8e66-f541ba54a66a","Type":"ContainerStarted","Data":"b21b4bc0d90aa1adad5ebf0204a655eecc62fd0d6af744087ad7d2145733db50"} Feb 19 10:05:18 crc kubenswrapper[4965]: I0219 10:05:18.666851 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8af99122-00d4-45e7-8e66-f541ba54a66a","Type":"ContainerStarted","Data":"6387879a978ba50e805b70e2f16de340239f5ad6c9d2be53311c6e97781b0a38"} Feb 19 10:05:18 crc kubenswrapper[4965]: I0219 10:05:18.666946 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 19 10:05:18 crc kubenswrapper[4965]: I0219 10:05:18.696767 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.696749145 podStartE2EDuration="3.696749145s" podCreationTimestamp="2026-02-19 10:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:05:18.682350395 +0000 UTC m=+1374.303671735" watchObservedRunningTime="2026-02-19 10:05:18.696749145 +0000 UTC m=+1374.318070445" Feb 19 10:05:18 crc kubenswrapper[4965]: I0219 10:05:18.716360 4965 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.716339801 podStartE2EDuration="2.716339801s" podCreationTimestamp="2026-02-19 10:05:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:05:18.702624287 +0000 UTC m=+1374.323945617" watchObservedRunningTime="2026-02-19 10:05:18.716339801 +0000 UTC m=+1374.337661131" Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.036800 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-66r4m" Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.105924 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-ovsdbserver-sb\") pod \"feacab4f-9866-41e7-a8c2-e9850aff1252\" (UID: \"feacab4f-9866-41e7-a8c2-e9850aff1252\") " Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.107008 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-config\") pod \"feacab4f-9866-41e7-a8c2-e9850aff1252\" (UID: \"feacab4f-9866-41e7-a8c2-e9850aff1252\") " Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.107121 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-dns-swift-storage-0\") pod \"feacab4f-9866-41e7-a8c2-e9850aff1252\" (UID: \"feacab4f-9866-41e7-a8c2-e9850aff1252\") " Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.107300 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-dns-svc\") pod \"feacab4f-9866-41e7-a8c2-e9850aff1252\" (UID: 
\"feacab4f-9866-41e7-a8c2-e9850aff1252\") " Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.107429 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnggr\" (UniqueName: \"kubernetes.io/projected/feacab4f-9866-41e7-a8c2-e9850aff1252-kube-api-access-lnggr\") pod \"feacab4f-9866-41e7-a8c2-e9850aff1252\" (UID: \"feacab4f-9866-41e7-a8c2-e9850aff1252\") " Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.107601 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-ovsdbserver-nb\") pod \"feacab4f-9866-41e7-a8c2-e9850aff1252\" (UID: \"feacab4f-9866-41e7-a8c2-e9850aff1252\") " Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.136292 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feacab4f-9866-41e7-a8c2-e9850aff1252-kube-api-access-lnggr" (OuterVolumeSpecName: "kube-api-access-lnggr") pod "feacab4f-9866-41e7-a8c2-e9850aff1252" (UID: "feacab4f-9866-41e7-a8c2-e9850aff1252"). InnerVolumeSpecName "kube-api-access-lnggr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.209631 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnggr\" (UniqueName: \"kubernetes.io/projected/feacab4f-9866-41e7-a8c2-e9850aff1252-kube-api-access-lnggr\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.303785 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "feacab4f-9866-41e7-a8c2-e9850aff1252" (UID: "feacab4f-9866-41e7-a8c2-e9850aff1252"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.304416 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "feacab4f-9866-41e7-a8c2-e9850aff1252" (UID: "feacab4f-9866-41e7-a8c2-e9850aff1252"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.306559 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "feacab4f-9866-41e7-a8c2-e9850aff1252" (UID: "feacab4f-9866-41e7-a8c2-e9850aff1252"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.315640 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.315675 4965 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.315685 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.360868 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-config" (OuterVolumeSpecName: "config") pod "feacab4f-9866-41e7-a8c2-e9850aff1252" (UID: 
"feacab4f-9866-41e7-a8c2-e9850aff1252"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.388795 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "feacab4f-9866-41e7-a8c2-e9850aff1252" (UID: "feacab4f-9866-41e7-a8c2-e9850aff1252"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.417717 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.418175 4965 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/feacab4f-9866-41e7-a8c2-e9850aff1252-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.679763 4965 generic.go:334] "Generic (PLEG): container finished" podID="feacab4f-9866-41e7-a8c2-e9850aff1252" containerID="7a4838ae6a8b0456b86cae2a6be896db8b6d9e6ee13475213a4ad0f585b35bec" exitCode=0 Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.679852 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-66r4m" event={"ID":"feacab4f-9866-41e7-a8c2-e9850aff1252","Type":"ContainerDied","Data":"7a4838ae6a8b0456b86cae2a6be896db8b6d9e6ee13475213a4ad0f585b35bec"} Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.679904 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-66r4m" event={"ID":"feacab4f-9866-41e7-a8c2-e9850aff1252","Type":"ContainerDied","Data":"52a02ffcc46a894acb9872515a61488b2e823b86c7010b316f702acef9b3762d"} Feb 
19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.679922 4965 scope.go:117] "RemoveContainer" containerID="7a4838ae6a8b0456b86cae2a6be896db8b6d9e6ee13475213a4ad0f585b35bec" Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.679866 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-66r4m" Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.683626 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c48eb36-b3c5-4516-a345-6dc813d425cb","Type":"ContainerStarted","Data":"37b03556e861708c26d3c1f49e7151c9cbd6b98e53ec0294941fe42652a494dd"} Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.683668 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c48eb36-b3c5-4516-a345-6dc813d425cb","Type":"ContainerStarted","Data":"64658e35d12f5cabaf06523e730d47175a03175015974d985b3a19c27f08eeb0"} Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.695514 4965 generic.go:334] "Generic (PLEG): container finished" podID="34a7f92d-3391-4c12-8d6b-14b531d39757" containerID="17e34b954630a15dc81e81259a8da845f60d1930dfd17fdc208b22d4eea61e55" exitCode=0 Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.695569 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hlxzf" event={"ID":"34a7f92d-3391-4c12-8d6b-14b531d39757","Type":"ContainerDied","Data":"17e34b954630a15dc81e81259a8da845f60d1930dfd17fdc208b22d4eea61e55"} Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.711012 4965 scope.go:117] "RemoveContainer" containerID="521d1a7f6f99a2ef4ae43a3c0ff1e7c74557907e5c7f53846cc9bf813c91d4a9" Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.747902 4965 scope.go:117] "RemoveContainer" containerID="7a4838ae6a8b0456b86cae2a6be896db8b6d9e6ee13475213a4ad0f585b35bec" Feb 19 10:05:19 crc kubenswrapper[4965]: E0219 10:05:19.748733 4965 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"7a4838ae6a8b0456b86cae2a6be896db8b6d9e6ee13475213a4ad0f585b35bec\": container with ID starting with 7a4838ae6a8b0456b86cae2a6be896db8b6d9e6ee13475213a4ad0f585b35bec not found: ID does not exist" containerID="7a4838ae6a8b0456b86cae2a6be896db8b6d9e6ee13475213a4ad0f585b35bec" Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.748775 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a4838ae6a8b0456b86cae2a6be896db8b6d9e6ee13475213a4ad0f585b35bec"} err="failed to get container status \"7a4838ae6a8b0456b86cae2a6be896db8b6d9e6ee13475213a4ad0f585b35bec\": rpc error: code = NotFound desc = could not find container \"7a4838ae6a8b0456b86cae2a6be896db8b6d9e6ee13475213a4ad0f585b35bec\": container with ID starting with 7a4838ae6a8b0456b86cae2a6be896db8b6d9e6ee13475213a4ad0f585b35bec not found: ID does not exist" Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.748808 4965 scope.go:117] "RemoveContainer" containerID="521d1a7f6f99a2ef4ae43a3c0ff1e7c74557907e5c7f53846cc9bf813c91d4a9" Feb 19 10:05:19 crc kubenswrapper[4965]: E0219 10:05:19.749159 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"521d1a7f6f99a2ef4ae43a3c0ff1e7c74557907e5c7f53846cc9bf813c91d4a9\": container with ID starting with 521d1a7f6f99a2ef4ae43a3c0ff1e7c74557907e5c7f53846cc9bf813c91d4a9 not found: ID does not exist" containerID="521d1a7f6f99a2ef4ae43a3c0ff1e7c74557907e5c7f53846cc9bf813c91d4a9" Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.749181 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"521d1a7f6f99a2ef4ae43a3c0ff1e7c74557907e5c7f53846cc9bf813c91d4a9"} err="failed to get container status \"521d1a7f6f99a2ef4ae43a3c0ff1e7c74557907e5c7f53846cc9bf813c91d4a9\": rpc error: code = NotFound desc = could not find container 
\"521d1a7f6f99a2ef4ae43a3c0ff1e7c74557907e5c7f53846cc9bf813c91d4a9\": container with ID starting with 521d1a7f6f99a2ef4ae43a3c0ff1e7c74557907e5c7f53846cc9bf813c91d4a9 not found: ID does not exist" Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.753374 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-66r4m"] Feb 19 10:05:19 crc kubenswrapper[4965]: I0219 10:05:19.764615 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-66r4m"] Feb 19 10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.186004 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.187156 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.211779 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feacab4f-9866-41e7-a8c2-e9850aff1252" path="/var/lib/kubelet/pods/feacab4f-9866-41e7-a8c2-e9850aff1252/volumes" Feb 19 10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.252077 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hlxzf" Feb 19 10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.357738 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsrxk\" (UniqueName: \"kubernetes.io/projected/34a7f92d-3391-4c12-8d6b-14b531d39757-kube-api-access-tsrxk\") pod \"34a7f92d-3391-4c12-8d6b-14b531d39757\" (UID: \"34a7f92d-3391-4c12-8d6b-14b531d39757\") " Feb 19 10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.357882 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a7f92d-3391-4c12-8d6b-14b531d39757-combined-ca-bundle\") pod \"34a7f92d-3391-4c12-8d6b-14b531d39757\" (UID: \"34a7f92d-3391-4c12-8d6b-14b531d39757\") " Feb 19 10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.358472 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a7f92d-3391-4c12-8d6b-14b531d39757-config-data\") pod \"34a7f92d-3391-4c12-8d6b-14b531d39757\" (UID: \"34a7f92d-3391-4c12-8d6b-14b531d39757\") " Feb 19 10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.358843 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34a7f92d-3391-4c12-8d6b-14b531d39757-scripts\") pod \"34a7f92d-3391-4c12-8d6b-14b531d39757\" (UID: \"34a7f92d-3391-4c12-8d6b-14b531d39757\") " Feb 19 10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.363644 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34a7f92d-3391-4c12-8d6b-14b531d39757-kube-api-access-tsrxk" (OuterVolumeSpecName: "kube-api-access-tsrxk") pod "34a7f92d-3391-4c12-8d6b-14b531d39757" (UID: "34a7f92d-3391-4c12-8d6b-14b531d39757"). InnerVolumeSpecName "kube-api-access-tsrxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.370364 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a7f92d-3391-4c12-8d6b-14b531d39757-scripts" (OuterVolumeSpecName: "scripts") pod "34a7f92d-3391-4c12-8d6b-14b531d39757" (UID: "34a7f92d-3391-4c12-8d6b-14b531d39757"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.394445 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a7f92d-3391-4c12-8d6b-14b531d39757-config-data" (OuterVolumeSpecName: "config-data") pod "34a7f92d-3391-4c12-8d6b-14b531d39757" (UID: "34a7f92d-3391-4c12-8d6b-14b531d39757"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.424418 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a7f92d-3391-4c12-8d6b-14b531d39757-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34a7f92d-3391-4c12-8d6b-14b531d39757" (UID: "34a7f92d-3391-4c12-8d6b-14b531d39757"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.461932 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a7f92d-3391-4c12-8d6b-14b531d39757-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.461973 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34a7f92d-3391-4c12-8d6b-14b531d39757-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.461983 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsrxk\" (UniqueName: \"kubernetes.io/projected/34a7f92d-3391-4c12-8d6b-14b531d39757-kube-api-access-tsrxk\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.461993 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a7f92d-3391-4c12-8d6b-14b531d39757-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.718631 4965 generic.go:334] "Generic (PLEG): container finished" podID="95695f23-a4c9-4165-9dd6-d897ada26e93" containerID="e6e39480141c2b298b0714ff7b8a5bf42287acb818abf00252f976c3aba872c4" exitCode=0 Feb 19 10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.718688 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k2dlm" event={"ID":"95695f23-a4c9-4165-9dd6-d897ada26e93","Type":"ContainerDied","Data":"e6e39480141c2b298b0714ff7b8a5bf42287acb818abf00252f976c3aba872c4"} Feb 19 10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.721487 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c48eb36-b3c5-4516-a345-6dc813d425cb","Type":"ContainerStarted","Data":"f90f4c1bd3e20625cc9aced4d5faaaf9897c41870dbbb0b10fd90dddbd81cb98"} Feb 19 
10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.722119 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.723594 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hlxzf" event={"ID":"34a7f92d-3391-4c12-8d6b-14b531d39757","Type":"ContainerDied","Data":"98f6ce1c3f4ded2169357ba14e0106b52d8bb44f2297074d6f61be52085281c7"} Feb 19 10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.723621 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98f6ce1c3f4ded2169357ba14e0106b52d8bb44f2297074d6f61be52085281c7" Feb 19 10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.723668 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hlxzf" Feb 19 10:05:21 crc kubenswrapper[4965]: I0219 10:05:21.784663 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.437300701 podStartE2EDuration="6.784640216s" podCreationTimestamp="2026-02-19 10:05:15 +0000 UTC" firstStartedPulling="2026-02-19 10:05:16.680874305 +0000 UTC m=+1372.302195615" lastFinishedPulling="2026-02-19 10:05:21.02821382 +0000 UTC m=+1376.649535130" observedRunningTime="2026-02-19 10:05:21.773529226 +0000 UTC m=+1377.394850546" watchObservedRunningTime="2026-02-19 10:05:21.784640216 +0000 UTC m=+1377.405961526" Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.310557 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k2dlm" Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.502073 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95695f23-a4c9-4165-9dd6-d897ada26e93-config-data\") pod \"95695f23-a4c9-4165-9dd6-d897ada26e93\" (UID: \"95695f23-a4c9-4165-9dd6-d897ada26e93\") " Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.502261 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95695f23-a4c9-4165-9dd6-d897ada26e93-scripts\") pod \"95695f23-a4c9-4165-9dd6-d897ada26e93\" (UID: \"95695f23-a4c9-4165-9dd6-d897ada26e93\") " Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.502393 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95695f23-a4c9-4165-9dd6-d897ada26e93-combined-ca-bundle\") pod \"95695f23-a4c9-4165-9dd6-d897ada26e93\" (UID: \"95695f23-a4c9-4165-9dd6-d897ada26e93\") " Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.502444 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxj7x\" (UniqueName: \"kubernetes.io/projected/95695f23-a4c9-4165-9dd6-d897ada26e93-kube-api-access-dxj7x\") pod \"95695f23-a4c9-4165-9dd6-d897ada26e93\" (UID: \"95695f23-a4c9-4165-9dd6-d897ada26e93\") " Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.509963 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95695f23-a4c9-4165-9dd6-d897ada26e93-kube-api-access-dxj7x" (OuterVolumeSpecName: "kube-api-access-dxj7x") pod "95695f23-a4c9-4165-9dd6-d897ada26e93" (UID: "95695f23-a4c9-4165-9dd6-d897ada26e93"). InnerVolumeSpecName "kube-api-access-dxj7x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.512221 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95695f23-a4c9-4165-9dd6-d897ada26e93-scripts" (OuterVolumeSpecName: "scripts") pod "95695f23-a4c9-4165-9dd6-d897ada26e93" (UID: "95695f23-a4c9-4165-9dd6-d897ada26e93"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.539758 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95695f23-a4c9-4165-9dd6-d897ada26e93-config-data" (OuterVolumeSpecName: "config-data") pod "95695f23-a4c9-4165-9dd6-d897ada26e93" (UID: "95695f23-a4c9-4165-9dd6-d897ada26e93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.547726 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95695f23-a4c9-4165-9dd6-d897ada26e93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95695f23-a4c9-4165-9dd6-d897ada26e93" (UID: "95695f23-a4c9-4165-9dd6-d897ada26e93"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.605490 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95695f23-a4c9-4165-9dd6-d897ada26e93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.605545 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxj7x\" (UniqueName: \"kubernetes.io/projected/95695f23-a4c9-4165-9dd6-d897ada26e93-kube-api-access-dxj7x\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.605568 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95695f23-a4c9-4165-9dd6-d897ada26e93-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.605585 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95695f23-a4c9-4165-9dd6-d897ada26e93-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.745246 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k2dlm" event={"ID":"95695f23-a4c9-4165-9dd6-d897ada26e93","Type":"ContainerDied","Data":"b0a95b5586e3a2df05ddb3fb61b7703fc28c02de4c14db4033e0c9fad71c52f4"} Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.745287 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k2dlm" Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.745302 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0a95b5586e3a2df05ddb3fb61b7703fc28c02de4c14db4033e0c9fad71c52f4" Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.843883 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 10:05:23 crc kubenswrapper[4965]: E0219 10:05:23.844294 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feacab4f-9866-41e7-a8c2-e9850aff1252" containerName="init" Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.844310 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="feacab4f-9866-41e7-a8c2-e9850aff1252" containerName="init" Feb 19 10:05:23 crc kubenswrapper[4965]: E0219 10:05:23.844332 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feacab4f-9866-41e7-a8c2-e9850aff1252" containerName="dnsmasq-dns" Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.844338 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="feacab4f-9866-41e7-a8c2-e9850aff1252" containerName="dnsmasq-dns" Feb 19 10:05:23 crc kubenswrapper[4965]: E0219 10:05:23.844363 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95695f23-a4c9-4165-9dd6-d897ada26e93" containerName="nova-cell1-conductor-db-sync" Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.844370 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="95695f23-a4c9-4165-9dd6-d897ada26e93" containerName="nova-cell1-conductor-db-sync" Feb 19 10:05:23 crc kubenswrapper[4965]: E0219 10:05:23.844382 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a7f92d-3391-4c12-8d6b-14b531d39757" containerName="nova-manage" Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.844387 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a7f92d-3391-4c12-8d6b-14b531d39757" 
containerName="nova-manage" Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.844582 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="95695f23-a4c9-4165-9dd6-d897ada26e93" containerName="nova-cell1-conductor-db-sync" Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.844600 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="feacab4f-9866-41e7-a8c2-e9850aff1252" containerName="dnsmasq-dns" Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.844612 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="34a7f92d-3391-4c12-8d6b-14b531d39757" containerName="nova-manage" Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.849368 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.855061 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 10:05:23 crc kubenswrapper[4965]: I0219 10:05:23.877727 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 10:05:24 crc kubenswrapper[4965]: I0219 10:05:24.014929 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2l8s\" (UniqueName: \"kubernetes.io/projected/04d07332-2cb5-49b4-b70c-9f3a13f73a09-kube-api-access-l2l8s\") pod \"nova-cell1-conductor-0\" (UID: \"04d07332-2cb5-49b4-b70c-9f3a13f73a09\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:05:24 crc kubenswrapper[4965]: I0219 10:05:24.015085 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d07332-2cb5-49b4-b70c-9f3a13f73a09-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"04d07332-2cb5-49b4-b70c-9f3a13f73a09\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:05:24 crc 
kubenswrapper[4965]: I0219 10:05:24.015151 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04d07332-2cb5-49b4-b70c-9f3a13f73a09-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"04d07332-2cb5-49b4-b70c-9f3a13f73a09\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:05:24 crc kubenswrapper[4965]: I0219 10:05:24.116968 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04d07332-2cb5-49b4-b70c-9f3a13f73a09-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"04d07332-2cb5-49b4-b70c-9f3a13f73a09\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:05:24 crc kubenswrapper[4965]: I0219 10:05:24.117226 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2l8s\" (UniqueName: \"kubernetes.io/projected/04d07332-2cb5-49b4-b70c-9f3a13f73a09-kube-api-access-l2l8s\") pod \"nova-cell1-conductor-0\" (UID: \"04d07332-2cb5-49b4-b70c-9f3a13f73a09\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:05:24 crc kubenswrapper[4965]: I0219 10:05:24.117293 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d07332-2cb5-49b4-b70c-9f3a13f73a09-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"04d07332-2cb5-49b4-b70c-9f3a13f73a09\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:05:24 crc kubenswrapper[4965]: I0219 10:05:24.121323 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d07332-2cb5-49b4-b70c-9f3a13f73a09-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"04d07332-2cb5-49b4-b70c-9f3a13f73a09\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:05:24 crc kubenswrapper[4965]: I0219 10:05:24.128982 4965 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04d07332-2cb5-49b4-b70c-9f3a13f73a09-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"04d07332-2cb5-49b4-b70c-9f3a13f73a09\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:05:24 crc kubenswrapper[4965]: I0219 10:05:24.136743 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2l8s\" (UniqueName: \"kubernetes.io/projected/04d07332-2cb5-49b4-b70c-9f3a13f73a09-kube-api-access-l2l8s\") pod \"nova-cell1-conductor-0\" (UID: \"04d07332-2cb5-49b4-b70c-9f3a13f73a09\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:05:24 crc kubenswrapper[4965]: I0219 10:05:24.172026 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 10:05:24 crc kubenswrapper[4965]: I0219 10:05:24.612839 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 10:05:24 crc kubenswrapper[4965]: I0219 10:05:24.770350 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"04d07332-2cb5-49b4-b70c-9f3a13f73a09","Type":"ContainerStarted","Data":"7a8e51a4b61d26d4ca1459d98e9c356b74a07ec6959d97c2575c57dcd1610b04"} Feb 19 10:05:25 crc kubenswrapper[4965]: I0219 10:05:25.788128 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"04d07332-2cb5-49b4-b70c-9f3a13f73a09","Type":"ContainerStarted","Data":"bee69dc794c719c8c551e6fbc0f875c86690043f02aa2293484d64397e04b349"} Feb 19 10:05:25 crc kubenswrapper[4965]: I0219 10:05:25.788501 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 19 10:05:25 crc kubenswrapper[4965]: I0219 10:05:25.838137 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.838110502 podStartE2EDuration="2.838110502s" 
podCreationTimestamp="2026-02-19 10:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:05:25.825114156 +0000 UTC m=+1381.446435506" watchObservedRunningTime="2026-02-19 10:05:25.838110502 +0000 UTC m=+1381.459431822" Feb 19 10:05:26 crc kubenswrapper[4965]: I0219 10:05:26.185294 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 10:05:26 crc kubenswrapper[4965]: I0219 10:05:26.185576 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 10:05:26 crc kubenswrapper[4965]: I0219 10:05:26.905886 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 19 10:05:27 crc kubenswrapper[4965]: I0219 10:05:27.034393 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 10:05:27 crc kubenswrapper[4965]: I0219 10:05:27.034449 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 10:05:27 crc kubenswrapper[4965]: I0219 10:05:27.203322 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c3b64316-be3d-46e5-b67d-176aae2dd815" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.218:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 10:05:27 crc kubenswrapper[4965]: I0219 10:05:27.203322 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c3b64316-be3d-46e5-b67d-176aae2dd815" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.218:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 10:05:27 crc kubenswrapper[4965]: I0219 10:05:27.450822 4965 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:05:27 crc kubenswrapper[4965]: I0219 10:05:27.494672 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:05:27 crc kubenswrapper[4965]: I0219 10:05:27.807010 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c3b64316-be3d-46e5-b67d-176aae2dd815" containerName="nova-metadata-log" containerID="cri-o://725b36523f2d9a796a7008058a06c690b679116b8c0c93f8ef64b5cddafd019d" gracePeriod=30 Feb 19 10:05:27 crc kubenswrapper[4965]: I0219 10:05:27.807087 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a730512c-555c-40bd-835b-e2ce5242bdff" containerName="nova-api-log" containerID="cri-o://fc7b169809ee70adf88159fd1bba1cbfd20f5a48f1d70c32635c48718cf4dfb0" gracePeriod=30 Feb 19 10:05:27 crc kubenswrapper[4965]: I0219 10:05:27.807099 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a730512c-555c-40bd-835b-e2ce5242bdff" containerName="nova-api-api" containerID="cri-o://096c0d9f6cbed57ebd73e1acb17c5ac851c1e7bb8c047ed14ff1387c59ef76a5" gracePeriod=30 Feb 19 10:05:27 crc kubenswrapper[4965]: I0219 10:05:27.807403 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c3b64316-be3d-46e5-b67d-176aae2dd815" containerName="nova-metadata-metadata" containerID="cri-o://960ed1676959bcb0f35d44e24dae95f7ebde5837397559b77cc0f41f0ff02635" gracePeriod=30 Feb 19 10:05:27 crc kubenswrapper[4965]: I0219 10:05:27.818528 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a730512c-555c-40bd-835b-e2ce5242bdff" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.220:8774/\": EOF" Feb 19 10:05:27 crc kubenswrapper[4965]: I0219 10:05:27.818605 4965 prober.go:107] 
"Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a730512c-555c-40bd-835b-e2ce5242bdff" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.220:8774/\": EOF" Feb 19 10:05:28 crc kubenswrapper[4965]: I0219 10:05:28.817544 4965 generic.go:334] "Generic (PLEG): container finished" podID="c3b64316-be3d-46e5-b67d-176aae2dd815" containerID="725b36523f2d9a796a7008058a06c690b679116b8c0c93f8ef64b5cddafd019d" exitCode=143 Feb 19 10:05:28 crc kubenswrapper[4965]: I0219 10:05:28.817655 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c3b64316-be3d-46e5-b67d-176aae2dd815","Type":"ContainerDied","Data":"725b36523f2d9a796a7008058a06c690b679116b8c0c93f8ef64b5cddafd019d"} Feb 19 10:05:28 crc kubenswrapper[4965]: I0219 10:05:28.819994 4965 generic.go:334] "Generic (PLEG): container finished" podID="a730512c-555c-40bd-835b-e2ce5242bdff" containerID="fc7b169809ee70adf88159fd1bba1cbfd20f5a48f1d70c32635c48718cf4dfb0" exitCode=143 Feb 19 10:05:28 crc kubenswrapper[4965]: I0219 10:05:28.820041 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a730512c-555c-40bd-835b-e2ce5242bdff","Type":"ContainerDied","Data":"fc7b169809ee70adf88159fd1bba1cbfd20f5a48f1d70c32635c48718cf4dfb0"} Feb 19 10:05:29 crc kubenswrapper[4965]: I0219 10:05:29.196904 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 19 10:05:32 crc kubenswrapper[4965]: I0219 10:05:32.834561 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:05:32 crc kubenswrapper[4965]: I0219 10:05:32.919116 4965 generic.go:334] "Generic (PLEG): container finished" podID="c3b64316-be3d-46e5-b67d-176aae2dd815" containerID="960ed1676959bcb0f35d44e24dae95f7ebde5837397559b77cc0f41f0ff02635" exitCode=0 Feb 19 10:05:32 crc kubenswrapper[4965]: I0219 10:05:32.919161 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c3b64316-be3d-46e5-b67d-176aae2dd815","Type":"ContainerDied","Data":"960ed1676959bcb0f35d44e24dae95f7ebde5837397559b77cc0f41f0ff02635"} Feb 19 10:05:32 crc kubenswrapper[4965]: I0219 10:05:32.919218 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c3b64316-be3d-46e5-b67d-176aae2dd815","Type":"ContainerDied","Data":"2ada989f14ac898fc49b0bfbc9c408be769c620227dbef8dd31bccf027abc4d9"} Feb 19 10:05:32 crc kubenswrapper[4965]: I0219 10:05:32.919237 4965 scope.go:117] "RemoveContainer" containerID="960ed1676959bcb0f35d44e24dae95f7ebde5837397559b77cc0f41f0ff02635" Feb 19 10:05:32 crc kubenswrapper[4965]: I0219 10:05:32.919287 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:05:32 crc kubenswrapper[4965]: I0219 10:05:32.944279 4965 scope.go:117] "RemoveContainer" containerID="725b36523f2d9a796a7008058a06c690b679116b8c0c93f8ef64b5cddafd019d" Feb 19 10:05:32 crc kubenswrapper[4965]: I0219 10:05:32.968408 4965 scope.go:117] "RemoveContainer" containerID="960ed1676959bcb0f35d44e24dae95f7ebde5837397559b77cc0f41f0ff02635" Feb 19 10:05:32 crc kubenswrapper[4965]: E0219 10:05:32.969140 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"960ed1676959bcb0f35d44e24dae95f7ebde5837397559b77cc0f41f0ff02635\": container with ID starting with 960ed1676959bcb0f35d44e24dae95f7ebde5837397559b77cc0f41f0ff02635 not found: ID does not exist" containerID="960ed1676959bcb0f35d44e24dae95f7ebde5837397559b77cc0f41f0ff02635" Feb 19 10:05:32 crc kubenswrapper[4965]: I0219 10:05:32.969230 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"960ed1676959bcb0f35d44e24dae95f7ebde5837397559b77cc0f41f0ff02635"} err="failed to get container status \"960ed1676959bcb0f35d44e24dae95f7ebde5837397559b77cc0f41f0ff02635\": rpc error: code = NotFound desc = could not find container \"960ed1676959bcb0f35d44e24dae95f7ebde5837397559b77cc0f41f0ff02635\": container with ID starting with 960ed1676959bcb0f35d44e24dae95f7ebde5837397559b77cc0f41f0ff02635 not found: ID does not exist" Feb 19 10:05:32 crc kubenswrapper[4965]: I0219 10:05:32.969311 4965 scope.go:117] "RemoveContainer" containerID="725b36523f2d9a796a7008058a06c690b679116b8c0c93f8ef64b5cddafd019d" Feb 19 10:05:32 crc kubenswrapper[4965]: E0219 10:05:32.969889 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"725b36523f2d9a796a7008058a06c690b679116b8c0c93f8ef64b5cddafd019d\": container with ID starting with 
725b36523f2d9a796a7008058a06c690b679116b8c0c93f8ef64b5cddafd019d not found: ID does not exist" containerID="725b36523f2d9a796a7008058a06c690b679116b8c0c93f8ef64b5cddafd019d" Feb 19 10:05:32 crc kubenswrapper[4965]: I0219 10:05:32.969916 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"725b36523f2d9a796a7008058a06c690b679116b8c0c93f8ef64b5cddafd019d"} err="failed to get container status \"725b36523f2d9a796a7008058a06c690b679116b8c0c93f8ef64b5cddafd019d\": rpc error: code = NotFound desc = could not find container \"725b36523f2d9a796a7008058a06c690b679116b8c0c93f8ef64b5cddafd019d\": container with ID starting with 725b36523f2d9a796a7008058a06c690b679116b8c0c93f8ef64b5cddafd019d not found: ID does not exist" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.008448 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b64316-be3d-46e5-b67d-176aae2dd815-combined-ca-bundle\") pod \"c3b64316-be3d-46e5-b67d-176aae2dd815\" (UID: \"c3b64316-be3d-46e5-b67d-176aae2dd815\") " Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.008629 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr76t\" (UniqueName: \"kubernetes.io/projected/c3b64316-be3d-46e5-b67d-176aae2dd815-kube-api-access-mr76t\") pod \"c3b64316-be3d-46e5-b67d-176aae2dd815\" (UID: \"c3b64316-be3d-46e5-b67d-176aae2dd815\") " Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.008706 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3b64316-be3d-46e5-b67d-176aae2dd815-config-data\") pod \"c3b64316-be3d-46e5-b67d-176aae2dd815\" (UID: \"c3b64316-be3d-46e5-b67d-176aae2dd815\") " Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.008797 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/c3b64316-be3d-46e5-b67d-176aae2dd815-logs\") pod \"c3b64316-be3d-46e5-b67d-176aae2dd815\" (UID: \"c3b64316-be3d-46e5-b67d-176aae2dd815\") " Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.008886 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b64316-be3d-46e5-b67d-176aae2dd815-nova-metadata-tls-certs\") pod \"c3b64316-be3d-46e5-b67d-176aae2dd815\" (UID: \"c3b64316-be3d-46e5-b67d-176aae2dd815\") " Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.009600 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3b64316-be3d-46e5-b67d-176aae2dd815-logs" (OuterVolumeSpecName: "logs") pod "c3b64316-be3d-46e5-b67d-176aae2dd815" (UID: "c3b64316-be3d-46e5-b67d-176aae2dd815"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.010529 4965 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3b64316-be3d-46e5-b67d-176aae2dd815-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.015954 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3b64316-be3d-46e5-b67d-176aae2dd815-kube-api-access-mr76t" (OuterVolumeSpecName: "kube-api-access-mr76t") pod "c3b64316-be3d-46e5-b67d-176aae2dd815" (UID: "c3b64316-be3d-46e5-b67d-176aae2dd815"). InnerVolumeSpecName "kube-api-access-mr76t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.040403 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3b64316-be3d-46e5-b67d-176aae2dd815-config-data" (OuterVolumeSpecName: "config-data") pod "c3b64316-be3d-46e5-b67d-176aae2dd815" (UID: "c3b64316-be3d-46e5-b67d-176aae2dd815"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.040991 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3b64316-be3d-46e5-b67d-176aae2dd815-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3b64316-be3d-46e5-b67d-176aae2dd815" (UID: "c3b64316-be3d-46e5-b67d-176aae2dd815"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.089591 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3b64316-be3d-46e5-b67d-176aae2dd815-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c3b64316-be3d-46e5-b67d-176aae2dd815" (UID: "c3b64316-be3d-46e5-b67d-176aae2dd815"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.113697 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b64316-be3d-46e5-b67d-176aae2dd815-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.114007 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr76t\" (UniqueName: \"kubernetes.io/projected/c3b64316-be3d-46e5-b67d-176aae2dd815-kube-api-access-mr76t\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.114133 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3b64316-be3d-46e5-b67d-176aae2dd815-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.114207 4965 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b64316-be3d-46e5-b67d-176aae2dd815-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.334893 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.359397 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.401236 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:05:33 crc kubenswrapper[4965]: E0219 10:05:33.401703 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b64316-be3d-46e5-b67d-176aae2dd815" containerName="nova-metadata-metadata" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.401715 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b64316-be3d-46e5-b67d-176aae2dd815" 
containerName="nova-metadata-metadata" Feb 19 10:05:33 crc kubenswrapper[4965]: E0219 10:05:33.401736 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b64316-be3d-46e5-b67d-176aae2dd815" containerName="nova-metadata-log" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.401742 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b64316-be3d-46e5-b67d-176aae2dd815" containerName="nova-metadata-log" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.401917 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b64316-be3d-46e5-b67d-176aae2dd815" containerName="nova-metadata-metadata" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.401930 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b64316-be3d-46e5-b67d-176aae2dd815" containerName="nova-metadata-log" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.403075 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.406260 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.406460 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.409768 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.522408 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\") " pod="openstack/nova-metadata-0" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.522499 4965 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-logs\") pod \"nova-metadata-0\" (UID: \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\") " pod="openstack/nova-metadata-0" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.522557 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\") " pod="openstack/nova-metadata-0" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.522616 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-config-data\") pod \"nova-metadata-0\" (UID: \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\") " pod="openstack/nova-metadata-0" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.522637 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqsj8\" (UniqueName: \"kubernetes.io/projected/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-kube-api-access-fqsj8\") pod \"nova-metadata-0\" (UID: \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\") " pod="openstack/nova-metadata-0" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.624828 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-config-data\") pod \"nova-metadata-0\" (UID: \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\") " pod="openstack/nova-metadata-0" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.624883 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqsj8\" (UniqueName: 
\"kubernetes.io/projected/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-kube-api-access-fqsj8\") pod \"nova-metadata-0\" (UID: \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\") " pod="openstack/nova-metadata-0" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.624991 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\") " pod="openstack/nova-metadata-0" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.625057 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-logs\") pod \"nova-metadata-0\" (UID: \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\") " pod="openstack/nova-metadata-0" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.625098 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\") " pod="openstack/nova-metadata-0" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.626118 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-logs\") pod \"nova-metadata-0\" (UID: \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\") " pod="openstack/nova-metadata-0" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.629530 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\") " pod="openstack/nova-metadata-0" Feb 19 
10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.629891 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\") " pod="openstack/nova-metadata-0" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.630481 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-config-data\") pod \"nova-metadata-0\" (UID: \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\") " pod="openstack/nova-metadata-0" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.647585 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqsj8\" (UniqueName: \"kubernetes.io/projected/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-kube-api-access-fqsj8\") pod \"nova-metadata-0\" (UID: \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\") " pod="openstack/nova-metadata-0" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.732986 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.833848 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.930620 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a730512c-555c-40bd-835b-e2ce5242bdff-logs\") pod \"a730512c-555c-40bd-835b-e2ce5242bdff\" (UID: \"a730512c-555c-40bd-835b-e2ce5242bdff\") " Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.930810 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a730512c-555c-40bd-835b-e2ce5242bdff-config-data\") pod \"a730512c-555c-40bd-835b-e2ce5242bdff\" (UID: \"a730512c-555c-40bd-835b-e2ce5242bdff\") " Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.930933 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a730512c-555c-40bd-835b-e2ce5242bdff-combined-ca-bundle\") pod \"a730512c-555c-40bd-835b-e2ce5242bdff\" (UID: \"a730512c-555c-40bd-835b-e2ce5242bdff\") " Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.930977 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4dbk\" (UniqueName: \"kubernetes.io/projected/a730512c-555c-40bd-835b-e2ce5242bdff-kube-api-access-s4dbk\") pod \"a730512c-555c-40bd-835b-e2ce5242bdff\" (UID: \"a730512c-555c-40bd-835b-e2ce5242bdff\") " Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.931369 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a730512c-555c-40bd-835b-e2ce5242bdff-logs" (OuterVolumeSpecName: "logs") pod "a730512c-555c-40bd-835b-e2ce5242bdff" (UID: "a730512c-555c-40bd-835b-e2ce5242bdff"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.931533 4965 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a730512c-555c-40bd-835b-e2ce5242bdff-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.959085 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a730512c-555c-40bd-835b-e2ce5242bdff-kube-api-access-s4dbk" (OuterVolumeSpecName: "kube-api-access-s4dbk") pod "a730512c-555c-40bd-835b-e2ce5242bdff" (UID: "a730512c-555c-40bd-835b-e2ce5242bdff"). InnerVolumeSpecName "kube-api-access-s4dbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.962754 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a730512c-555c-40bd-835b-e2ce5242bdff-config-data" (OuterVolumeSpecName: "config-data") pod "a730512c-555c-40bd-835b-e2ce5242bdff" (UID: "a730512c-555c-40bd-835b-e2ce5242bdff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.971836 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a730512c-555c-40bd-835b-e2ce5242bdff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a730512c-555c-40bd-835b-e2ce5242bdff" (UID: "a730512c-555c-40bd-835b-e2ce5242bdff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.979420 4965 generic.go:334] "Generic (PLEG): container finished" podID="a730512c-555c-40bd-835b-e2ce5242bdff" containerID="096c0d9f6cbed57ebd73e1acb17c5ac851c1e7bb8c047ed14ff1387c59ef76a5" exitCode=0 Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.979472 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a730512c-555c-40bd-835b-e2ce5242bdff","Type":"ContainerDied","Data":"096c0d9f6cbed57ebd73e1acb17c5ac851c1e7bb8c047ed14ff1387c59ef76a5"} Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.979505 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a730512c-555c-40bd-835b-e2ce5242bdff","Type":"ContainerDied","Data":"0c3b7f145126edce3dcd3ddbeae5a4943d9af5fc0482c7405a356552409a9be7"} Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.979521 4965 scope.go:117] "RemoveContainer" containerID="096c0d9f6cbed57ebd73e1acb17c5ac851c1e7bb8c047ed14ff1387c59ef76a5" Feb 19 10:05:33 crc kubenswrapper[4965]: I0219 10:05:33.979540 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.014836 4965 scope.go:117] "RemoveContainer" containerID="fc7b169809ee70adf88159fd1bba1cbfd20f5a48f1d70c32635c48718cf4dfb0" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.034056 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a730512c-555c-40bd-835b-e2ce5242bdff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.034091 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4dbk\" (UniqueName: \"kubernetes.io/projected/a730512c-555c-40bd-835b-e2ce5242bdff-kube-api-access-s4dbk\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.034102 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a730512c-555c-40bd-835b-e2ce5242bdff-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.034123 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.050283 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.058475 4965 scope.go:117] "RemoveContainer" containerID="096c0d9f6cbed57ebd73e1acb17c5ac851c1e7bb8c047ed14ff1387c59ef76a5" Feb 19 10:05:34 crc kubenswrapper[4965]: E0219 10:05:34.058899 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"096c0d9f6cbed57ebd73e1acb17c5ac851c1e7bb8c047ed14ff1387c59ef76a5\": container with ID starting with 096c0d9f6cbed57ebd73e1acb17c5ac851c1e7bb8c047ed14ff1387c59ef76a5 not found: ID does not exist" containerID="096c0d9f6cbed57ebd73e1acb17c5ac851c1e7bb8c047ed14ff1387c59ef76a5" Feb 19 10:05:34 
crc kubenswrapper[4965]: I0219 10:05:34.058934 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"096c0d9f6cbed57ebd73e1acb17c5ac851c1e7bb8c047ed14ff1387c59ef76a5"} err="failed to get container status \"096c0d9f6cbed57ebd73e1acb17c5ac851c1e7bb8c047ed14ff1387c59ef76a5\": rpc error: code = NotFound desc = could not find container \"096c0d9f6cbed57ebd73e1acb17c5ac851c1e7bb8c047ed14ff1387c59ef76a5\": container with ID starting with 096c0d9f6cbed57ebd73e1acb17c5ac851c1e7bb8c047ed14ff1387c59ef76a5 not found: ID does not exist" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.058955 4965 scope.go:117] "RemoveContainer" containerID="fc7b169809ee70adf88159fd1bba1cbfd20f5a48f1d70c32635c48718cf4dfb0" Feb 19 10:05:34 crc kubenswrapper[4965]: E0219 10:05:34.059228 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc7b169809ee70adf88159fd1bba1cbfd20f5a48f1d70c32635c48718cf4dfb0\": container with ID starting with fc7b169809ee70adf88159fd1bba1cbfd20f5a48f1d70c32635c48718cf4dfb0 not found: ID does not exist" containerID="fc7b169809ee70adf88159fd1bba1cbfd20f5a48f1d70c32635c48718cf4dfb0" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.059248 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc7b169809ee70adf88159fd1bba1cbfd20f5a48f1d70c32635c48718cf4dfb0"} err="failed to get container status \"fc7b169809ee70adf88159fd1bba1cbfd20f5a48f1d70c32635c48718cf4dfb0\": rpc error: code = NotFound desc = could not find container \"fc7b169809ee70adf88159fd1bba1cbfd20f5a48f1d70c32635c48718cf4dfb0\": container with ID starting with fc7b169809ee70adf88159fd1bba1cbfd20f5a48f1d70c32635c48718cf4dfb0 not found: ID does not exist" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.070836 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 10:05:34 crc kubenswrapper[4965]: 
E0219 10:05:34.071538 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a730512c-555c-40bd-835b-e2ce5242bdff" containerName="nova-api-log" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.071558 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="a730512c-555c-40bd-835b-e2ce5242bdff" containerName="nova-api-log" Feb 19 10:05:34 crc kubenswrapper[4965]: E0219 10:05:34.071599 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a730512c-555c-40bd-835b-e2ce5242bdff" containerName="nova-api-api" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.071606 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="a730512c-555c-40bd-835b-e2ce5242bdff" containerName="nova-api-api" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.071807 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="a730512c-555c-40bd-835b-e2ce5242bdff" containerName="nova-api-log" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.071830 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="a730512c-555c-40bd-835b-e2ce5242bdff" containerName="nova-api-api" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.073290 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.077672 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.084662 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.226746 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:05:34 crc kubenswrapper[4965]: W0219 10:05:34.236171 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd6d75f5_49d6_41d3_b812_e406dea5a4d1.slice/crio-70b85840ec315f8dbc682c29904b04455ae5a9a48bb9663c18e8d6cc978a7ca7 WatchSource:0}: Error finding container 70b85840ec315f8dbc682c29904b04455ae5a9a48bb9663c18e8d6cc978a7ca7: Status 404 returned error can't find the container with id 70b85840ec315f8dbc682c29904b04455ae5a9a48bb9663c18e8d6cc978a7ca7 Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.237170 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb877059-a8dd-4347-ac5e-08baba808882-logs\") pod \"nova-api-0\" (UID: \"eb877059-a8dd-4347-ac5e-08baba808882\") " pod="openstack/nova-api-0" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.237291 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb877059-a8dd-4347-ac5e-08baba808882-config-data\") pod \"nova-api-0\" (UID: \"eb877059-a8dd-4347-ac5e-08baba808882\") " pod="openstack/nova-api-0" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.237361 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtk98\" (UniqueName: 
\"kubernetes.io/projected/eb877059-a8dd-4347-ac5e-08baba808882-kube-api-access-xtk98\") pod \"nova-api-0\" (UID: \"eb877059-a8dd-4347-ac5e-08baba808882\") " pod="openstack/nova-api-0" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.237392 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb877059-a8dd-4347-ac5e-08baba808882-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb877059-a8dd-4347-ac5e-08baba808882\") " pod="openstack/nova-api-0" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.339217 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb877059-a8dd-4347-ac5e-08baba808882-logs\") pod \"nova-api-0\" (UID: \"eb877059-a8dd-4347-ac5e-08baba808882\") " pod="openstack/nova-api-0" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.339299 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb877059-a8dd-4347-ac5e-08baba808882-config-data\") pod \"nova-api-0\" (UID: \"eb877059-a8dd-4347-ac5e-08baba808882\") " pod="openstack/nova-api-0" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.339362 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk98\" (UniqueName: \"kubernetes.io/projected/eb877059-a8dd-4347-ac5e-08baba808882-kube-api-access-xtk98\") pod \"nova-api-0\" (UID: \"eb877059-a8dd-4347-ac5e-08baba808882\") " pod="openstack/nova-api-0" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.339400 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb877059-a8dd-4347-ac5e-08baba808882-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb877059-a8dd-4347-ac5e-08baba808882\") " pod="openstack/nova-api-0" Feb 19 10:05:34 crc 
kubenswrapper[4965]: I0219 10:05:34.339937 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb877059-a8dd-4347-ac5e-08baba808882-logs\") pod \"nova-api-0\" (UID: \"eb877059-a8dd-4347-ac5e-08baba808882\") " pod="openstack/nova-api-0" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.343383 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb877059-a8dd-4347-ac5e-08baba808882-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb877059-a8dd-4347-ac5e-08baba808882\") " pod="openstack/nova-api-0" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.344185 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb877059-a8dd-4347-ac5e-08baba808882-config-data\") pod \"nova-api-0\" (UID: \"eb877059-a8dd-4347-ac5e-08baba808882\") " pod="openstack/nova-api-0" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.362566 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtk98\" (UniqueName: \"kubernetes.io/projected/eb877059-a8dd-4347-ac5e-08baba808882-kube-api-access-xtk98\") pod \"nova-api-0\" (UID: \"eb877059-a8dd-4347-ac5e-08baba808882\") " pod="openstack/nova-api-0" Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.391951 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:05:34 crc kubenswrapper[4965]: W0219 10:05:34.866582 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb877059_a8dd_4347_ac5e_08baba808882.slice/crio-bb3e5893598adae78c0f7f2f4db331fbde15be8cc56d30e10b0f68e9ec76bfae WatchSource:0}: Error finding container bb3e5893598adae78c0f7f2f4db331fbde15be8cc56d30e10b0f68e9ec76bfae: Status 404 returned error can't find the container with id bb3e5893598adae78c0f7f2f4db331fbde15be8cc56d30e10b0f68e9ec76bfae Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.870155 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.990278 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb877059-a8dd-4347-ac5e-08baba808882","Type":"ContainerStarted","Data":"bb3e5893598adae78c0f7f2f4db331fbde15be8cc56d30e10b0f68e9ec76bfae"} Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.992178 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd6d75f5-49d6-41d3-b812-e406dea5a4d1","Type":"ContainerStarted","Data":"475d7b0f8a76a385b34955f80982c1f0792b7806b49576cbe9b801197953d60e"} Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.992226 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd6d75f5-49d6-41d3-b812-e406dea5a4d1","Type":"ContainerStarted","Data":"06b8e45a13ff271edb63dc5141fa51d3cd108a2a4888b96e852541d8b583efcc"} Feb 19 10:05:34 crc kubenswrapper[4965]: I0219 10:05:34.992240 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd6d75f5-49d6-41d3-b812-e406dea5a4d1","Type":"ContainerStarted","Data":"70b85840ec315f8dbc682c29904b04455ae5a9a48bb9663c18e8d6cc978a7ca7"} Feb 19 10:05:35 crc kubenswrapper[4965]: I0219 
10:05:35.011716 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.011693656 podStartE2EDuration="2.011693656s" podCreationTimestamp="2026-02-19 10:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:05:35.010133218 +0000 UTC m=+1390.631454538" watchObservedRunningTime="2026-02-19 10:05:35.011693656 +0000 UTC m=+1390.633014966" Feb 19 10:05:35 crc kubenswrapper[4965]: I0219 10:05:35.218775 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a730512c-555c-40bd-835b-e2ce5242bdff" path="/var/lib/kubelet/pods/a730512c-555c-40bd-835b-e2ce5242bdff/volumes" Feb 19 10:05:35 crc kubenswrapper[4965]: I0219 10:05:35.220344 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3b64316-be3d-46e5-b67d-176aae2dd815" path="/var/lib/kubelet/pods/c3b64316-be3d-46e5-b67d-176aae2dd815/volumes" Feb 19 10:05:36 crc kubenswrapper[4965]: I0219 10:05:36.003401 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb877059-a8dd-4347-ac5e-08baba808882","Type":"ContainerStarted","Data":"a16fc5c3d0e0db847e840708f974a2bba3681c23e9b364723aa9a128602d3e57"} Feb 19 10:05:36 crc kubenswrapper[4965]: I0219 10:05:36.003709 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb877059-a8dd-4347-ac5e-08baba808882","Type":"ContainerStarted","Data":"c293a7335c0acf4b1b23da789a047620557caffe1b453f3b6e0ed68558acff77"} Feb 19 10:05:36 crc kubenswrapper[4965]: I0219 10:05:36.029165 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.029136871 podStartE2EDuration="2.029136871s" podCreationTimestamp="2026-02-19 10:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 10:05:36.022721236 +0000 UTC m=+1391.644042566" watchObservedRunningTime="2026-02-19 10:05:36.029136871 +0000 UTC m=+1391.650458201" Feb 19 10:05:38 crc kubenswrapper[4965]: I0219 10:05:38.737433 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 10:05:38 crc kubenswrapper[4965]: I0219 10:05:38.737794 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 10:05:43 crc kubenswrapper[4965]: I0219 10:05:43.733283 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 10:05:43 crc kubenswrapper[4965]: I0219 10:05:43.734165 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 10:05:44 crc kubenswrapper[4965]: I0219 10:05:44.394044 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 10:05:44 crc kubenswrapper[4965]: I0219 10:05:44.394851 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 10:05:44 crc kubenswrapper[4965]: I0219 10:05:44.752484 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fd6d75f5-49d6-41d3-b812-e406dea5a4d1" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.222:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 10:05:44 crc kubenswrapper[4965]: I0219 10:05:44.752498 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fd6d75f5-49d6-41d3-b812-e406dea5a4d1" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.222:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 10:05:44 crc kubenswrapper[4965]: I0219 10:05:44.817604 4965 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:44 crc kubenswrapper[4965]: I0219 10:05:44.975000 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4ee246c-cea9-4377-9f96-54387ac61022-config-data\") pod \"f4ee246c-cea9-4377-9f96-54387ac61022\" (UID: \"f4ee246c-cea9-4377-9f96-54387ac61022\") " Feb 19 10:05:44 crc kubenswrapper[4965]: I0219 10:05:44.975491 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qdxt\" (UniqueName: \"kubernetes.io/projected/f4ee246c-cea9-4377-9f96-54387ac61022-kube-api-access-8qdxt\") pod \"f4ee246c-cea9-4377-9f96-54387ac61022\" (UID: \"f4ee246c-cea9-4377-9f96-54387ac61022\") " Feb 19 10:05:44 crc kubenswrapper[4965]: I0219 10:05:44.975693 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ee246c-cea9-4377-9f96-54387ac61022-combined-ca-bundle\") pod \"f4ee246c-cea9-4377-9f96-54387ac61022\" (UID: \"f4ee246c-cea9-4377-9f96-54387ac61022\") " Feb 19 10:05:44 crc kubenswrapper[4965]: I0219 10:05:44.980378 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4ee246c-cea9-4377-9f96-54387ac61022-kube-api-access-8qdxt" (OuterVolumeSpecName: "kube-api-access-8qdxt") pod "f4ee246c-cea9-4377-9f96-54387ac61022" (UID: "f4ee246c-cea9-4377-9f96-54387ac61022"). InnerVolumeSpecName "kube-api-access-8qdxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.000219 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.014911 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ee246c-cea9-4377-9f96-54387ac61022-config-data" (OuterVolumeSpecName: "config-data") pod "f4ee246c-cea9-4377-9f96-54387ac61022" (UID: "f4ee246c-cea9-4377-9f96-54387ac61022"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.015982 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ee246c-cea9-4377-9f96-54387ac61022-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4ee246c-cea9-4377-9f96-54387ac61022" (UID: "f4ee246c-cea9-4377-9f96-54387ac61022"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.077588 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db417f58-59be-4949-a86c-60ca4439ec09-config-data\") pod \"db417f58-59be-4949-a86c-60ca4439ec09\" (UID: \"db417f58-59be-4949-a86c-60ca4439ec09\") " Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.077815 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx4qg\" (UniqueName: \"kubernetes.io/projected/db417f58-59be-4949-a86c-60ca4439ec09-kube-api-access-tx4qg\") pod \"db417f58-59be-4949-a86c-60ca4439ec09\" (UID: \"db417f58-59be-4949-a86c-60ca4439ec09\") " Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.078042 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db417f58-59be-4949-a86c-60ca4439ec09-combined-ca-bundle\") pod \"db417f58-59be-4949-a86c-60ca4439ec09\" (UID: 
\"db417f58-59be-4949-a86c-60ca4439ec09\") " Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.078764 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4ee246c-cea9-4377-9f96-54387ac61022-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.078825 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qdxt\" (UniqueName: \"kubernetes.io/projected/f4ee246c-cea9-4377-9f96-54387ac61022-kube-api-access-8qdxt\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.078840 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ee246c-cea9-4377-9f96-54387ac61022-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.081159 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db417f58-59be-4949-a86c-60ca4439ec09-kube-api-access-tx4qg" (OuterVolumeSpecName: "kube-api-access-tx4qg") pod "db417f58-59be-4949-a86c-60ca4439ec09" (UID: "db417f58-59be-4949-a86c-60ca4439ec09"). InnerVolumeSpecName "kube-api-access-tx4qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.102496 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db417f58-59be-4949-a86c-60ca4439ec09-config-data" (OuterVolumeSpecName: "config-data") pod "db417f58-59be-4949-a86c-60ca4439ec09" (UID: "db417f58-59be-4949-a86c-60ca4439ec09"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.103334 4965 generic.go:334] "Generic (PLEG): container finished" podID="f4ee246c-cea9-4377-9f96-54387ac61022" containerID="e55738e982b49f6e967c17ef7f27af1b07c46293e5a63d43b461413a45c29242" exitCode=137 Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.103406 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f4ee246c-cea9-4377-9f96-54387ac61022","Type":"ContainerDied","Data":"e55738e982b49f6e967c17ef7f27af1b07c46293e5a63d43b461413a45c29242"} Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.103419 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.103447 4965 scope.go:117] "RemoveContainer" containerID="e55738e982b49f6e967c17ef7f27af1b07c46293e5a63d43b461413a45c29242" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.103434 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f4ee246c-cea9-4377-9f96-54387ac61022","Type":"ContainerDied","Data":"6724d1622fe064ffd34d221b4ffa2500297856af043a23e8820038be3e098aa9"} Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.108547 4965 generic.go:334] "Generic (PLEG): container finished" podID="db417f58-59be-4949-a86c-60ca4439ec09" containerID="6f65abfd992faa041dc14920565b145c1bc5db823aa149fcb04f697a71a60ac0" exitCode=137 Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.108580 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"db417f58-59be-4949-a86c-60ca4439ec09","Type":"ContainerDied","Data":"6f65abfd992faa041dc14920565b145c1bc5db823aa149fcb04f697a71a60ac0"} Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.108600 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"db417f58-59be-4949-a86c-60ca4439ec09","Type":"ContainerDied","Data":"1f3f761240a8147451b8c3a853cbf11c25ecccf012f617661287386e34126af3"} Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.108645 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.126250 4965 scope.go:117] "RemoveContainer" containerID="e55738e982b49f6e967c17ef7f27af1b07c46293e5a63d43b461413a45c29242" Feb 19 10:05:45 crc kubenswrapper[4965]: E0219 10:05:45.126603 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e55738e982b49f6e967c17ef7f27af1b07c46293e5a63d43b461413a45c29242\": container with ID starting with e55738e982b49f6e967c17ef7f27af1b07c46293e5a63d43b461413a45c29242 not found: ID does not exist" containerID="e55738e982b49f6e967c17ef7f27af1b07c46293e5a63d43b461413a45c29242" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.126686 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e55738e982b49f6e967c17ef7f27af1b07c46293e5a63d43b461413a45c29242"} err="failed to get container status \"e55738e982b49f6e967c17ef7f27af1b07c46293e5a63d43b461413a45c29242\": rpc error: code = NotFound desc = could not find container \"e55738e982b49f6e967c17ef7f27af1b07c46293e5a63d43b461413a45c29242\": container with ID starting with e55738e982b49f6e967c17ef7f27af1b07c46293e5a63d43b461413a45c29242 not found: ID does not exist" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.126757 4965 scope.go:117] "RemoveContainer" containerID="6f65abfd992faa041dc14920565b145c1bc5db823aa149fcb04f697a71a60ac0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.127095 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db417f58-59be-4949-a86c-60ca4439ec09-combined-ca-bundle" (OuterVolumeSpecName: 
"combined-ca-bundle") pod "db417f58-59be-4949-a86c-60ca4439ec09" (UID: "db417f58-59be-4949-a86c-60ca4439ec09"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.142807 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.172411 4965 scope.go:117] "RemoveContainer" containerID="6f65abfd992faa041dc14920565b145c1bc5db823aa149fcb04f697a71a60ac0" Feb 19 10:05:45 crc kubenswrapper[4965]: E0219 10:05:45.179634 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f65abfd992faa041dc14920565b145c1bc5db823aa149fcb04f697a71a60ac0\": container with ID starting with 6f65abfd992faa041dc14920565b145c1bc5db823aa149fcb04f697a71a60ac0 not found: ID does not exist" containerID="6f65abfd992faa041dc14920565b145c1bc5db823aa149fcb04f697a71a60ac0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.179838 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f65abfd992faa041dc14920565b145c1bc5db823aa149fcb04f697a71a60ac0"} err="failed to get container status \"6f65abfd992faa041dc14920565b145c1bc5db823aa149fcb04f697a71a60ac0\": rpc error: code = NotFound desc = could not find container \"6f65abfd992faa041dc14920565b145c1bc5db823aa149fcb04f697a71a60ac0\": container with ID starting with 6f65abfd992faa041dc14920565b145c1bc5db823aa149fcb04f697a71a60ac0 not found: ID does not exist" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.182397 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db417f58-59be-4949-a86c-60ca4439ec09-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.190378 4965 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/db417f58-59be-4949-a86c-60ca4439ec09-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.190958 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx4qg\" (UniqueName: \"kubernetes.io/projected/db417f58-59be-4949-a86c-60ca4439ec09-kube-api-access-tx4qg\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.221818 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.222005 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 10:05:45 crc kubenswrapper[4965]: E0219 10:05:45.222817 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ee246c-cea9-4377-9f96-54387ac61022" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.222854 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ee246c-cea9-4377-9f96-54387ac61022" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 10:05:45 crc kubenswrapper[4965]: E0219 10:05:45.222899 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db417f58-59be-4949-a86c-60ca4439ec09" containerName="nova-scheduler-scheduler" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.222912 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="db417f58-59be-4949-a86c-60ca4439ec09" containerName="nova-scheduler-scheduler" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.223296 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4ee246c-cea9-4377-9f96-54387ac61022" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.223338 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="db417f58-59be-4949-a86c-60ca4439ec09" 
containerName="nova-scheduler-scheduler" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.224645 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.224821 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.228156 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.228539 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.229699 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.293120 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9358573e-5a2b-4f2a-bbff-0e55e0e00869-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9358573e-5a2b-4f2a-bbff-0e55e0e00869\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.293490 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9358573e-5a2b-4f2a-bbff-0e55e0e00869-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9358573e-5a2b-4f2a-bbff-0e55e0e00869\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.293592 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9358573e-5a2b-4f2a-bbff-0e55e0e00869-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9358573e-5a2b-4f2a-bbff-0e55e0e00869\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.293716 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9358573e-5a2b-4f2a-bbff-0e55e0e00869-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9358573e-5a2b-4f2a-bbff-0e55e0e00869\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.293803 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhqhs\" (UniqueName: \"kubernetes.io/projected/9358573e-5a2b-4f2a-bbff-0e55e0e00869-kube-api-access-zhqhs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9358573e-5a2b-4f2a-bbff-0e55e0e00869\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.395840 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9358573e-5a2b-4f2a-bbff-0e55e0e00869-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9358573e-5a2b-4f2a-bbff-0e55e0e00869\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.395916 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhqhs\" (UniqueName: \"kubernetes.io/projected/9358573e-5a2b-4f2a-bbff-0e55e0e00869-kube-api-access-zhqhs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9358573e-5a2b-4f2a-bbff-0e55e0e00869\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.396041 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9358573e-5a2b-4f2a-bbff-0e55e0e00869-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9358573e-5a2b-4f2a-bbff-0e55e0e00869\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.396103 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9358573e-5a2b-4f2a-bbff-0e55e0e00869-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9358573e-5a2b-4f2a-bbff-0e55e0e00869\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.396133 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9358573e-5a2b-4f2a-bbff-0e55e0e00869-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9358573e-5a2b-4f2a-bbff-0e55e0e00869\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.400212 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9358573e-5a2b-4f2a-bbff-0e55e0e00869-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9358573e-5a2b-4f2a-bbff-0e55e0e00869\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.400619 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9358573e-5a2b-4f2a-bbff-0e55e0e00869-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9358573e-5a2b-4f2a-bbff-0e55e0e00869\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.400880 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9358573e-5a2b-4f2a-bbff-0e55e0e00869-vencrypt-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"9358573e-5a2b-4f2a-bbff-0e55e0e00869\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.401719 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9358573e-5a2b-4f2a-bbff-0e55e0e00869-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9358573e-5a2b-4f2a-bbff-0e55e0e00869\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.414699 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhqhs\" (UniqueName: \"kubernetes.io/projected/9358573e-5a2b-4f2a-bbff-0e55e0e00869-kube-api-access-zhqhs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9358573e-5a2b-4f2a-bbff-0e55e0e00869\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.475468 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eb877059-a8dd-4347-ac5e-08baba808882" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.223:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.475482 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eb877059-a8dd-4347-ac5e-08baba808882" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.223:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.549120 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.562464 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.568909 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.579380 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.613709 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.616936 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.620690 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.719517 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg5b4\" (UniqueName: \"kubernetes.io/projected/edd94405-a4e9-4078-b7f7-d0fe27e28d69-kube-api-access-zg5b4\") pod \"nova-scheduler-0\" (UID: \"edd94405-a4e9-4078-b7f7-d0fe27e28d69\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.719867 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd94405-a4e9-4078-b7f7-d0fe27e28d69-config-data\") pod \"nova-scheduler-0\" (UID: \"edd94405-a4e9-4078-b7f7-d0fe27e28d69\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.720039 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd94405-a4e9-4078-b7f7-d0fe27e28d69-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"edd94405-a4e9-4078-b7f7-d0fe27e28d69\") " 
pod="openstack/nova-scheduler-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.822130 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd94405-a4e9-4078-b7f7-d0fe27e28d69-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"edd94405-a4e9-4078-b7f7-d0fe27e28d69\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.822257 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg5b4\" (UniqueName: \"kubernetes.io/projected/edd94405-a4e9-4078-b7f7-d0fe27e28d69-kube-api-access-zg5b4\") pod \"nova-scheduler-0\" (UID: \"edd94405-a4e9-4078-b7f7-d0fe27e28d69\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.822308 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd94405-a4e9-4078-b7f7-d0fe27e28d69-config-data\") pod \"nova-scheduler-0\" (UID: \"edd94405-a4e9-4078-b7f7-d0fe27e28d69\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.836704 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd94405-a4e9-4078-b7f7-d0fe27e28d69-config-data\") pod \"nova-scheduler-0\" (UID: \"edd94405-a4e9-4078-b7f7-d0fe27e28d69\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.837722 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd94405-a4e9-4078-b7f7-d0fe27e28d69-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"edd94405-a4e9-4078-b7f7-d0fe27e28d69\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:45 crc kubenswrapper[4965]: I0219 10:05:45.840910 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zg5b4\" (UniqueName: \"kubernetes.io/projected/edd94405-a4e9-4078-b7f7-d0fe27e28d69-kube-api-access-zg5b4\") pod \"nova-scheduler-0\" (UID: \"edd94405-a4e9-4078-b7f7-d0fe27e28d69\") " pod="openstack/nova-scheduler-0" Feb 19 10:05:46 crc kubenswrapper[4965]: I0219 10:05:46.035987 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:05:46 crc kubenswrapper[4965]: W0219 10:05:46.080218 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9358573e_5a2b_4f2a_bbff_0e55e0e00869.slice/crio-7b82a65150624f37049f568b3dca67b52edb7c48a89641b731017254a724fa29 WatchSource:0}: Error finding container 7b82a65150624f37049f568b3dca67b52edb7c48a89641b731017254a724fa29: Status 404 returned error can't find the container with id 7b82a65150624f37049f568b3dca67b52edb7c48a89641b731017254a724fa29 Feb 19 10:05:46 crc kubenswrapper[4965]: I0219 10:05:46.080595 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 10:05:46 crc kubenswrapper[4965]: I0219 10:05:46.122765 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 10:05:46 crc kubenswrapper[4965]: I0219 10:05:46.134404 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9358573e-5a2b-4f2a-bbff-0e55e0e00869","Type":"ContainerStarted","Data":"7b82a65150624f37049f568b3dca67b52edb7c48a89641b731017254a724fa29"} Feb 19 10:05:46 crc kubenswrapper[4965]: I0219 10:05:46.543453 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:05:47 crc kubenswrapper[4965]: I0219 10:05:47.152334 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"edd94405-a4e9-4078-b7f7-d0fe27e28d69","Type":"ContainerStarted","Data":"e44bda06a835128697318ffaa90768ab207e29cafd57d5c0492c8732a45e70a7"} Feb 19 10:05:47 crc kubenswrapper[4965]: I0219 10:05:47.152754 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"edd94405-a4e9-4078-b7f7-d0fe27e28d69","Type":"ContainerStarted","Data":"a3e3a728c0049958984849ebb8d60a71986860c7637dcbf547179f303ac3e246"} Feb 19 10:05:47 crc kubenswrapper[4965]: I0219 10:05:47.159024 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9358573e-5a2b-4f2a-bbff-0e55e0e00869","Type":"ContainerStarted","Data":"1499d37dcc5a0bd21838fbdbaf809801f49d07594a36497c4088577ae683cf46"} Feb 19 10:05:47 crc kubenswrapper[4965]: I0219 10:05:47.182038 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.182014196 podStartE2EDuration="2.182014196s" podCreationTimestamp="2026-02-19 10:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:05:47.172881243 +0000 UTC m=+1402.794202583" watchObservedRunningTime="2026-02-19 10:05:47.182014196 +0000 UTC m=+1402.803335506" Feb 19 10:05:47 crc kubenswrapper[4965]: I0219 10:05:47.200776 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.20075491 podStartE2EDuration="2.20075491s" podCreationTimestamp="2026-02-19 10:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:05:47.196759453 +0000 UTC m=+1402.818080773" watchObservedRunningTime="2026-02-19 10:05:47.20075491 +0000 UTC m=+1402.822076220" Feb 19 10:05:47 crc kubenswrapper[4965]: I0219 10:05:47.226964 4965 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="db417f58-59be-4949-a86c-60ca4439ec09" path="/var/lib/kubelet/pods/db417f58-59be-4949-a86c-60ca4439ec09/volumes" Feb 19 10:05:47 crc kubenswrapper[4965]: I0219 10:05:47.227806 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4ee246c-cea9-4377-9f96-54387ac61022" path="/var/lib/kubelet/pods/f4ee246c-cea9-4377-9f96-54387ac61022/volumes" Feb 19 10:05:50 crc kubenswrapper[4965]: I0219 10:05:50.016620 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:05:50 crc kubenswrapper[4965]: I0219 10:05:50.017436 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6263bd60-b2d0-44ff-ae54-874728576f1d" containerName="kube-state-metrics" containerID="cri-o://5f5b70678e77927b68b8ea51dd42cb45ad507a78c7344cc371c42214804244fd" gracePeriod=30 Feb 19 10:05:50 crc kubenswrapper[4965]: I0219 10:05:50.197832 4965 generic.go:334] "Generic (PLEG): container finished" podID="6263bd60-b2d0-44ff-ae54-874728576f1d" containerID="5f5b70678e77927b68b8ea51dd42cb45ad507a78c7344cc371c42214804244fd" exitCode=2 Feb 19 10:05:50 crc kubenswrapper[4965]: I0219 10:05:50.197890 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6263bd60-b2d0-44ff-ae54-874728576f1d","Type":"ContainerDied","Data":"5f5b70678e77927b68b8ea51dd42cb45ad507a78c7344cc371c42214804244fd"} Feb 19 10:05:50 crc kubenswrapper[4965]: I0219 10:05:50.562627 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:50 crc kubenswrapper[4965]: I0219 10:05:50.586129 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 10:05:50 crc kubenswrapper[4965]: I0219 10:05:50.639270 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcqdp\" (UniqueName: \"kubernetes.io/projected/6263bd60-b2d0-44ff-ae54-874728576f1d-kube-api-access-zcqdp\") pod \"6263bd60-b2d0-44ff-ae54-874728576f1d\" (UID: \"6263bd60-b2d0-44ff-ae54-874728576f1d\") " Feb 19 10:05:50 crc kubenswrapper[4965]: I0219 10:05:50.658834 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6263bd60-b2d0-44ff-ae54-874728576f1d-kube-api-access-zcqdp" (OuterVolumeSpecName: "kube-api-access-zcqdp") pod "6263bd60-b2d0-44ff-ae54-874728576f1d" (UID: "6263bd60-b2d0-44ff-ae54-874728576f1d"). InnerVolumeSpecName "kube-api-access-zcqdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:05:50 crc kubenswrapper[4965]: I0219 10:05:50.741707 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcqdp\" (UniqueName: \"kubernetes.io/projected/6263bd60-b2d0-44ff-ae54-874728576f1d-kube-api-access-zcqdp\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.036086 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.215841 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.227539 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6263bd60-b2d0-44ff-ae54-874728576f1d","Type":"ContainerDied","Data":"e10a4be2e0f74e37559727d687c0b37894c0b3202300f5aaed28101235b5b398"} Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.227771 4965 scope.go:117] "RemoveContainer" containerID="5f5b70678e77927b68b8ea51dd42cb45ad507a78c7344cc371c42214804244fd" Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.277609 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.290078 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:05:51 crc kubenswrapper[4965]: E0219 10:05:51.290516 4965 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6263bd60_b2d0_44ff_ae54_874728576f1d.slice/crio-e10a4be2e0f74e37559727d687c0b37894c0b3202300f5aaed28101235b5b398\": RecentStats: unable to find data in memory cache]" Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.319907 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:05:51 crc kubenswrapper[4965]: E0219 10:05:51.320593 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6263bd60-b2d0-44ff-ae54-874728576f1d" containerName="kube-state-metrics" Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.320612 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="6263bd60-b2d0-44ff-ae54-874728576f1d" containerName="kube-state-metrics" Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.320817 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="6263bd60-b2d0-44ff-ae54-874728576f1d" 
containerName="kube-state-metrics" Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.321527 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.328176 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.328400 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.333616 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.362054 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr8vt\" (UniqueName: \"kubernetes.io/projected/947a2943-c25c-4606-848a-a2942e8988c9-kube-api-access-mr8vt\") pod \"kube-state-metrics-0\" (UID: \"947a2943-c25c-4606-848a-a2942e8988c9\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.362138 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/947a2943-c25c-4606-848a-a2942e8988c9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"947a2943-c25c-4606-848a-a2942e8988c9\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.362283 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/947a2943-c25c-4606-848a-a2942e8988c9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"947a2943-c25c-4606-848a-a2942e8988c9\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:51 crc kubenswrapper[4965]: 
I0219 10:05:51.362493 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/947a2943-c25c-4606-848a-a2942e8988c9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"947a2943-c25c-4606-848a-a2942e8988c9\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.464853 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr8vt\" (UniqueName: \"kubernetes.io/projected/947a2943-c25c-4606-848a-a2942e8988c9-kube-api-access-mr8vt\") pod \"kube-state-metrics-0\" (UID: \"947a2943-c25c-4606-848a-a2942e8988c9\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.464915 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/947a2943-c25c-4606-848a-a2942e8988c9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"947a2943-c25c-4606-848a-a2942e8988c9\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.464995 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/947a2943-c25c-4606-848a-a2942e8988c9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"947a2943-c25c-4606-848a-a2942e8988c9\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.465044 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/947a2943-c25c-4606-848a-a2942e8988c9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"947a2943-c25c-4606-848a-a2942e8988c9\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.471802 4965 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/947a2943-c25c-4606-848a-a2942e8988c9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"947a2943-c25c-4606-848a-a2942e8988c9\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.475880 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/947a2943-c25c-4606-848a-a2942e8988c9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"947a2943-c25c-4606-848a-a2942e8988c9\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.477304 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/947a2943-c25c-4606-848a-a2942e8988c9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"947a2943-c25c-4606-848a-a2942e8988c9\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.489543 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr8vt\" (UniqueName: \"kubernetes.io/projected/947a2943-c25c-4606-848a-a2942e8988c9-kube-api-access-mr8vt\") pod \"kube-state-metrics-0\" (UID: \"947a2943-c25c-4606-848a-a2942e8988c9\") " pod="openstack/kube-state-metrics-0" Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.641491 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.821365 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.821960 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1c48eb36-b3c5-4516-a345-6dc813d425cb" containerName="ceilometer-central-agent" containerID="cri-o://95b86d8f471de0d406d27b87b967f19339596823d3562e22741b5ef38fbefd78" gracePeriod=30 Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.823673 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1c48eb36-b3c5-4516-a345-6dc813d425cb" containerName="proxy-httpd" containerID="cri-o://f90f4c1bd3e20625cc9aced4d5faaaf9897c41870dbbb0b10fd90dddbd81cb98" gracePeriod=30 Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.823864 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1c48eb36-b3c5-4516-a345-6dc813d425cb" containerName="sg-core" containerID="cri-o://37b03556e861708c26d3c1f49e7151c9cbd6b98e53ec0294941fe42652a494dd" gracePeriod=30 Feb 19 10:05:51 crc kubenswrapper[4965]: I0219 10:05:51.823922 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1c48eb36-b3c5-4516-a345-6dc813d425cb" containerName="ceilometer-notification-agent" containerID="cri-o://64658e35d12f5cabaf06523e730d47175a03175015974d985b3a19c27f08eeb0" gracePeriod=30 Feb 19 10:05:52 crc kubenswrapper[4965]: I0219 10:05:52.096884 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:05:52 crc kubenswrapper[4965]: W0219 10:05:52.096890 4965 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod947a2943_c25c_4606_848a_a2942e8988c9.slice/crio-cb22d686e026a12d6344b3b81d1861c865e72fb3cabf9ab391496577fb57ad6d WatchSource:0}: Error finding container cb22d686e026a12d6344b3b81d1861c865e72fb3cabf9ab391496577fb57ad6d: Status 404 returned error can't find the container with id cb22d686e026a12d6344b3b81d1861c865e72fb3cabf9ab391496577fb57ad6d Feb 19 10:05:52 crc kubenswrapper[4965]: I0219 10:05:52.228797 4965 generic.go:334] "Generic (PLEG): container finished" podID="1c48eb36-b3c5-4516-a345-6dc813d425cb" containerID="f90f4c1bd3e20625cc9aced4d5faaaf9897c41870dbbb0b10fd90dddbd81cb98" exitCode=0 Feb 19 10:05:52 crc kubenswrapper[4965]: I0219 10:05:52.229110 4965 generic.go:334] "Generic (PLEG): container finished" podID="1c48eb36-b3c5-4516-a345-6dc813d425cb" containerID="37b03556e861708c26d3c1f49e7151c9cbd6b98e53ec0294941fe42652a494dd" exitCode=2 Feb 19 10:05:52 crc kubenswrapper[4965]: I0219 10:05:52.228943 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c48eb36-b3c5-4516-a345-6dc813d425cb","Type":"ContainerDied","Data":"f90f4c1bd3e20625cc9aced4d5faaaf9897c41870dbbb0b10fd90dddbd81cb98"} Feb 19 10:05:52 crc kubenswrapper[4965]: I0219 10:05:52.229167 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c48eb36-b3c5-4516-a345-6dc813d425cb","Type":"ContainerDied","Data":"37b03556e861708c26d3c1f49e7151c9cbd6b98e53ec0294941fe42652a494dd"} Feb 19 10:05:52 crc kubenswrapper[4965]: I0219 10:05:52.236104 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"947a2943-c25c-4606-848a-a2942e8988c9","Type":"ContainerStarted","Data":"cb22d686e026a12d6344b3b81d1861c865e72fb3cabf9ab391496577fb57ad6d"} Feb 19 10:05:53 crc kubenswrapper[4965]: I0219 10:05:53.211326 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6263bd60-b2d0-44ff-ae54-874728576f1d" 
path="/var/lib/kubelet/pods/6263bd60-b2d0-44ff-ae54-874728576f1d/volumes" Feb 19 10:05:53 crc kubenswrapper[4965]: I0219 10:05:53.251677 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"947a2943-c25c-4606-848a-a2942e8988c9","Type":"ContainerStarted","Data":"8420f267db2a559f05678b75edfaa9f370f1087b84e2c60a70d38d9105ff2b0f"} Feb 19 10:05:53 crc kubenswrapper[4965]: I0219 10:05:53.252536 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 19 10:05:53 crc kubenswrapper[4965]: I0219 10:05:53.255797 4965 generic.go:334] "Generic (PLEG): container finished" podID="1c48eb36-b3c5-4516-a345-6dc813d425cb" containerID="95b86d8f471de0d406d27b87b967f19339596823d3562e22741b5ef38fbefd78" exitCode=0 Feb 19 10:05:53 crc kubenswrapper[4965]: I0219 10:05:53.255833 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c48eb36-b3c5-4516-a345-6dc813d425cb","Type":"ContainerDied","Data":"95b86d8f471de0d406d27b87b967f19339596823d3562e22741b5ef38fbefd78"} Feb 19 10:05:53 crc kubenswrapper[4965]: I0219 10:05:53.275055 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.930682412 podStartE2EDuration="2.275035817s" podCreationTimestamp="2026-02-19 10:05:51 +0000 UTC" firstStartedPulling="2026-02-19 10:05:52.099301256 +0000 UTC m=+1407.720622566" lastFinishedPulling="2026-02-19 10:05:52.443654661 +0000 UTC m=+1408.064975971" observedRunningTime="2026-02-19 10:05:53.270525806 +0000 UTC m=+1408.891847136" watchObservedRunningTime="2026-02-19 10:05:53.275035817 +0000 UTC m=+1408.896357127" Feb 19 10:05:53 crc kubenswrapper[4965]: I0219 10:05:53.741466 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 10:05:53 crc kubenswrapper[4965]: I0219 10:05:53.742671 4965 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 10:05:53 crc kubenswrapper[4965]: I0219 10:05:53.746755 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 10:05:54 crc kubenswrapper[4965]: I0219 10:05:54.272392 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 10:05:54 crc kubenswrapper[4965]: I0219 10:05:54.407172 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 10:05:54 crc kubenswrapper[4965]: I0219 10:05:54.407563 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 10:05:54 crc kubenswrapper[4965]: I0219 10:05:54.410864 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 10:05:54 crc kubenswrapper[4965]: I0219 10:05:54.414608 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.274031 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.278561 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.476679 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54dd998c-dbkhp"] Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.478947 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.506671 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-dbkhp"] Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.562973 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.585298 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.599974 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-ovsdbserver-nb\") pod \"dnsmasq-dns-54dd998c-dbkhp\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.600171 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-dns-svc\") pod \"dnsmasq-dns-54dd998c-dbkhp\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.600232 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqp2v\" (UniqueName: \"kubernetes.io/projected/6d323130-9034-45b0-9f95-02b4494ff391-kube-api-access-lqp2v\") pod \"dnsmasq-dns-54dd998c-dbkhp\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.600270 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-dns-swift-storage-0\") pod \"dnsmasq-dns-54dd998c-dbkhp\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.600347 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-config\") pod \"dnsmasq-dns-54dd998c-dbkhp\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.600417 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-ovsdbserver-sb\") pod \"dnsmasq-dns-54dd998c-dbkhp\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.702540 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-dns-swift-storage-0\") pod \"dnsmasq-dns-54dd998c-dbkhp\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.702642 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-config\") pod \"dnsmasq-dns-54dd998c-dbkhp\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.702720 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-ovsdbserver-sb\") pod \"dnsmasq-dns-54dd998c-dbkhp\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.702789 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-ovsdbserver-nb\") pod \"dnsmasq-dns-54dd998c-dbkhp\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.702903 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-dns-svc\") pod \"dnsmasq-dns-54dd998c-dbkhp\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.702921 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqp2v\" (UniqueName: \"kubernetes.io/projected/6d323130-9034-45b0-9f95-02b4494ff391-kube-api-access-lqp2v\") pod \"dnsmasq-dns-54dd998c-dbkhp\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.704056 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-config\") pod \"dnsmasq-dns-54dd998c-dbkhp\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.704545 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-ovsdbserver-nb\") pod 
\"dnsmasq-dns-54dd998c-dbkhp\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.704600 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-ovsdbserver-sb\") pod \"dnsmasq-dns-54dd998c-dbkhp\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.705023 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-dns-svc\") pod \"dnsmasq-dns-54dd998c-dbkhp\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.705415 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-dns-swift-storage-0\") pod \"dnsmasq-dns-54dd998c-dbkhp\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.782884 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqp2v\" (UniqueName: \"kubernetes.io/projected/6d323130-9034-45b0-9f95-02b4494ff391-kube-api-access-lqp2v\") pod \"dnsmasq-dns-54dd998c-dbkhp\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:05:55 crc kubenswrapper[4965]: I0219 10:05:55.805928 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:05:56 crc kubenswrapper[4965]: I0219 10:05:56.037941 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 10:05:56 crc kubenswrapper[4965]: I0219 10:05:56.119012 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 10:05:56 crc kubenswrapper[4965]: I0219 10:05:56.125286 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.223385 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c48eb36-b3c5-4516-a345-6dc813d425cb-run-httpd\") pod \"1c48eb36-b3c5-4516-a345-6dc813d425cb\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.223419 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c48eb36-b3c5-4516-a345-6dc813d425cb-log-httpd\") pod \"1c48eb36-b3c5-4516-a345-6dc813d425cb\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.225154 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c48eb36-b3c5-4516-a345-6dc813d425cb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1c48eb36-b3c5-4516-a345-6dc813d425cb" (UID: "1c48eb36-b3c5-4516-a345-6dc813d425cb"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.225772 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c48eb36-b3c5-4516-a345-6dc813d425cb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1c48eb36-b3c5-4516-a345-6dc813d425cb" (UID: "1c48eb36-b3c5-4516-a345-6dc813d425cb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.226112 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-scripts\") pod \"1c48eb36-b3c5-4516-a345-6dc813d425cb\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.226169 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw97c\" (UniqueName: \"kubernetes.io/projected/1c48eb36-b3c5-4516-a345-6dc813d425cb-kube-api-access-dw97c\") pod \"1c48eb36-b3c5-4516-a345-6dc813d425cb\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.226231 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-sg-core-conf-yaml\") pod \"1c48eb36-b3c5-4516-a345-6dc813d425cb\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.226262 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-config-data\") pod \"1c48eb36-b3c5-4516-a345-6dc813d425cb\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.226345 4965 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-combined-ca-bundle\") pod \"1c48eb36-b3c5-4516-a345-6dc813d425cb\" (UID: \"1c48eb36-b3c5-4516-a345-6dc813d425cb\") " Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.227007 4965 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c48eb36-b3c5-4516-a345-6dc813d425cb-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.227024 4965 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c48eb36-b3c5-4516-a345-6dc813d425cb-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.232145 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-scripts" (OuterVolumeSpecName: "scripts") pod "1c48eb36-b3c5-4516-a345-6dc813d425cb" (UID: "1c48eb36-b3c5-4516-a345-6dc813d425cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.238326 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c48eb36-b3c5-4516-a345-6dc813d425cb-kube-api-access-dw97c" (OuterVolumeSpecName: "kube-api-access-dw97c") pod "1c48eb36-b3c5-4516-a345-6dc813d425cb" (UID: "1c48eb36-b3c5-4516-a345-6dc813d425cb"). InnerVolumeSpecName "kube-api-access-dw97c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.287073 4965 generic.go:334] "Generic (PLEG): container finished" podID="1c48eb36-b3c5-4516-a345-6dc813d425cb" containerID="64658e35d12f5cabaf06523e730d47175a03175015974d985b3a19c27f08eeb0" exitCode=0 Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.287668 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1c48eb36-b3c5-4516-a345-6dc813d425cb" (UID: "1c48eb36-b3c5-4516-a345-6dc813d425cb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.287723 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c48eb36-b3c5-4516-a345-6dc813d425cb","Type":"ContainerDied","Data":"64658e35d12f5cabaf06523e730d47175a03175015974d985b3a19c27f08eeb0"} Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.287765 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1c48eb36-b3c5-4516-a345-6dc813d425cb","Type":"ContainerDied","Data":"0063e24ff2fbb68b3564678bf6a64e11e71b5973b5ba2e436d6cb04929a70ee4"} Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.287782 4965 scope.go:117] "RemoveContainer" containerID="f90f4c1bd3e20625cc9aced4d5faaaf9897c41870dbbb0b10fd90dddbd81cb98" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.288014 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.312346 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.339854 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.339883 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw97c\" (UniqueName: \"kubernetes.io/projected/1c48eb36-b3c5-4516-a345-6dc813d425cb-kube-api-access-dw97c\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.339894 4965 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.344428 4965 scope.go:117] "RemoveContainer" containerID="37b03556e861708c26d3c1f49e7151c9cbd6b98e53ec0294941fe42652a494dd" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.350951 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.352055 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c48eb36-b3c5-4516-a345-6dc813d425cb" (UID: "1c48eb36-b3c5-4516-a345-6dc813d425cb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.397292 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-config-data" (OuterVolumeSpecName: "config-data") pod "1c48eb36-b3c5-4516-a345-6dc813d425cb" (UID: "1c48eb36-b3c5-4516-a345-6dc813d425cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.451945 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.451977 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c48eb36-b3c5-4516-a345-6dc813d425cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.454372 4965 scope.go:117] "RemoveContainer" containerID="64658e35d12f5cabaf06523e730d47175a03175015974d985b3a19c27f08eeb0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.454892 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-dbkhp"] Feb 19 10:05:57 crc kubenswrapper[4965]: W0219 10:05:56.463183 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d323130_9034_45b0_9f95_02b4494ff391.slice/crio-649c5c2d86383d5aa368caa7bf569c17540d93290773970d519f329a0774a7e3 WatchSource:0}: Error finding container 649c5c2d86383d5aa368caa7bf569c17540d93290773970d519f329a0774a7e3: Status 404 returned error can't find the container with id 649c5c2d86383d5aa368caa7bf569c17540d93290773970d519f329a0774a7e3 Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.491138 4965 scope.go:117] "RemoveContainer" 
containerID="95b86d8f471de0d406d27b87b967f19339596823d3562e22741b5ef38fbefd78" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.529370 4965 scope.go:117] "RemoveContainer" containerID="f90f4c1bd3e20625cc9aced4d5faaaf9897c41870dbbb0b10fd90dddbd81cb98" Feb 19 10:05:57 crc kubenswrapper[4965]: E0219 10:05:56.530090 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f90f4c1bd3e20625cc9aced4d5faaaf9897c41870dbbb0b10fd90dddbd81cb98\": container with ID starting with f90f4c1bd3e20625cc9aced4d5faaaf9897c41870dbbb0b10fd90dddbd81cb98 not found: ID does not exist" containerID="f90f4c1bd3e20625cc9aced4d5faaaf9897c41870dbbb0b10fd90dddbd81cb98" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.530136 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f90f4c1bd3e20625cc9aced4d5faaaf9897c41870dbbb0b10fd90dddbd81cb98"} err="failed to get container status \"f90f4c1bd3e20625cc9aced4d5faaaf9897c41870dbbb0b10fd90dddbd81cb98\": rpc error: code = NotFound desc = could not find container \"f90f4c1bd3e20625cc9aced4d5faaaf9897c41870dbbb0b10fd90dddbd81cb98\": container with ID starting with f90f4c1bd3e20625cc9aced4d5faaaf9897c41870dbbb0b10fd90dddbd81cb98 not found: ID does not exist" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.530169 4965 scope.go:117] "RemoveContainer" containerID="37b03556e861708c26d3c1f49e7151c9cbd6b98e53ec0294941fe42652a494dd" Feb 19 10:05:57 crc kubenswrapper[4965]: E0219 10:05:56.530467 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37b03556e861708c26d3c1f49e7151c9cbd6b98e53ec0294941fe42652a494dd\": container with ID starting with 37b03556e861708c26d3c1f49e7151c9cbd6b98e53ec0294941fe42652a494dd not found: ID does not exist" containerID="37b03556e861708c26d3c1f49e7151c9cbd6b98e53ec0294941fe42652a494dd" Feb 19 10:05:57 crc 
kubenswrapper[4965]: I0219 10:05:56.530494 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37b03556e861708c26d3c1f49e7151c9cbd6b98e53ec0294941fe42652a494dd"} err="failed to get container status \"37b03556e861708c26d3c1f49e7151c9cbd6b98e53ec0294941fe42652a494dd\": rpc error: code = NotFound desc = could not find container \"37b03556e861708c26d3c1f49e7151c9cbd6b98e53ec0294941fe42652a494dd\": container with ID starting with 37b03556e861708c26d3c1f49e7151c9cbd6b98e53ec0294941fe42652a494dd not found: ID does not exist" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.530516 4965 scope.go:117] "RemoveContainer" containerID="64658e35d12f5cabaf06523e730d47175a03175015974d985b3a19c27f08eeb0" Feb 19 10:05:57 crc kubenswrapper[4965]: E0219 10:05:56.530725 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64658e35d12f5cabaf06523e730d47175a03175015974d985b3a19c27f08eeb0\": container with ID starting with 64658e35d12f5cabaf06523e730d47175a03175015974d985b3a19c27f08eeb0 not found: ID does not exist" containerID="64658e35d12f5cabaf06523e730d47175a03175015974d985b3a19c27f08eeb0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.530749 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64658e35d12f5cabaf06523e730d47175a03175015974d985b3a19c27f08eeb0"} err="failed to get container status \"64658e35d12f5cabaf06523e730d47175a03175015974d985b3a19c27f08eeb0\": rpc error: code = NotFound desc = could not find container \"64658e35d12f5cabaf06523e730d47175a03175015974d985b3a19c27f08eeb0\": container with ID starting with 64658e35d12f5cabaf06523e730d47175a03175015974d985b3a19c27f08eeb0 not found: ID does not exist" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.530765 4965 scope.go:117] "RemoveContainer" containerID="95b86d8f471de0d406d27b87b967f19339596823d3562e22741b5ef38fbefd78" Feb 19 
10:05:57 crc kubenswrapper[4965]: E0219 10:05:56.531147 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95b86d8f471de0d406d27b87b967f19339596823d3562e22741b5ef38fbefd78\": container with ID starting with 95b86d8f471de0d406d27b87b967f19339596823d3562e22741b5ef38fbefd78 not found: ID does not exist" containerID="95b86d8f471de0d406d27b87b967f19339596823d3562e22741b5ef38fbefd78" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.531208 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95b86d8f471de0d406d27b87b967f19339596823d3562e22741b5ef38fbefd78"} err="failed to get container status \"95b86d8f471de0d406d27b87b967f19339596823d3562e22741b5ef38fbefd78\": rpc error: code = NotFound desc = could not find container \"95b86d8f471de0d406d27b87b967f19339596823d3562e22741b5ef38fbefd78\": container with ID starting with 95b86d8f471de0d406d27b87b967f19339596823d3562e22741b5ef38fbefd78 not found: ID does not exist" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.656905 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.676276 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.695668 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-rdjn2"] Feb 19 10:05:57 crc kubenswrapper[4965]: E0219 10:05:56.696142 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c48eb36-b3c5-4516-a345-6dc813d425cb" containerName="ceilometer-notification-agent" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.696154 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c48eb36-b3c5-4516-a345-6dc813d425cb" containerName="ceilometer-notification-agent" Feb 19 10:05:57 crc kubenswrapper[4965]: E0219 10:05:56.696165 
4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c48eb36-b3c5-4516-a345-6dc813d425cb" containerName="proxy-httpd" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.696171 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c48eb36-b3c5-4516-a345-6dc813d425cb" containerName="proxy-httpd" Feb 19 10:05:57 crc kubenswrapper[4965]: E0219 10:05:56.696182 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c48eb36-b3c5-4516-a345-6dc813d425cb" containerName="sg-core" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.696188 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c48eb36-b3c5-4516-a345-6dc813d425cb" containerName="sg-core" Feb 19 10:05:57 crc kubenswrapper[4965]: E0219 10:05:56.696211 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c48eb36-b3c5-4516-a345-6dc813d425cb" containerName="ceilometer-central-agent" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.696218 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c48eb36-b3c5-4516-a345-6dc813d425cb" containerName="ceilometer-central-agent" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.696401 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c48eb36-b3c5-4516-a345-6dc813d425cb" containerName="ceilometer-notification-agent" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.696419 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c48eb36-b3c5-4516-a345-6dc813d425cb" containerName="proxy-httpd" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.696429 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c48eb36-b3c5-4516-a345-6dc813d425cb" containerName="ceilometer-central-agent" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.696444 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c48eb36-b3c5-4516-a345-6dc813d425cb" containerName="sg-core" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 
10:05:56.697155 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rdjn2" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.700564 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.702265 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.708259 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-rdjn2"] Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.732057 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.734892 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.742252 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.742382 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.742517 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.760509 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b21ddc99-df08-4635-996d-872a7c3f6f3b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rdjn2\" (UID: \"b21ddc99-df08-4635-996d-872a7c3f6f3b\") " pod="openstack/nova-cell1-cell-mapping-rdjn2" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.760743 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2284\" (UniqueName: \"kubernetes.io/projected/b21ddc99-df08-4635-996d-872a7c3f6f3b-kube-api-access-k2284\") pod \"nova-cell1-cell-mapping-rdjn2\" (UID: \"b21ddc99-df08-4635-996d-872a7c3f6f3b\") " pod="openstack/nova-cell1-cell-mapping-rdjn2" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.760894 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b21ddc99-df08-4635-996d-872a7c3f6f3b-config-data\") pod \"nova-cell1-cell-mapping-rdjn2\" (UID: \"b21ddc99-df08-4635-996d-872a7c3f6f3b\") " pod="openstack/nova-cell1-cell-mapping-rdjn2" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.760956 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b21ddc99-df08-4635-996d-872a7c3f6f3b-scripts\") pod \"nova-cell1-cell-mapping-rdjn2\" (UID: \"b21ddc99-df08-4635-996d-872a7c3f6f3b\") " pod="openstack/nova-cell1-cell-mapping-rdjn2" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.772599 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.863024 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b21ddc99-df08-4635-996d-872a7c3f6f3b-config-data\") pod \"nova-cell1-cell-mapping-rdjn2\" (UID: \"b21ddc99-df08-4635-996d-872a7c3f6f3b\") " pod="openstack/nova-cell1-cell-mapping-rdjn2" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.863077 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3453a132-2f25-411c-8ac8-fa0f8f9b958b-log-httpd\") pod \"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " 
pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.863111 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2fsf\" (UniqueName: \"kubernetes.io/projected/3453a132-2f25-411c-8ac8-fa0f8f9b958b-kube-api-access-h2fsf\") pod \"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.863155 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b21ddc99-df08-4635-996d-872a7c3f6f3b-scripts\") pod \"nova-cell1-cell-mapping-rdjn2\" (UID: \"b21ddc99-df08-4635-996d-872a7c3f6f3b\") " pod="openstack/nova-cell1-cell-mapping-rdjn2" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.863324 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3453a132-2f25-411c-8ac8-fa0f8f9b958b-run-httpd\") pod \"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.863457 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.863558 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.863684 4965 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-scripts\") pod \"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.863820 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-config-data\") pod \"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.863875 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b21ddc99-df08-4635-996d-872a7c3f6f3b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rdjn2\" (UID: \"b21ddc99-df08-4635-996d-872a7c3f6f3b\") " pod="openstack/nova-cell1-cell-mapping-rdjn2" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.863997 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.864030 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2284\" (UniqueName: \"kubernetes.io/projected/b21ddc99-df08-4635-996d-872a7c3f6f3b-kube-api-access-k2284\") pod \"nova-cell1-cell-mapping-rdjn2\" (UID: \"b21ddc99-df08-4635-996d-872a7c3f6f3b\") " pod="openstack/nova-cell1-cell-mapping-rdjn2" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.870012 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b21ddc99-df08-4635-996d-872a7c3f6f3b-scripts\") pod \"nova-cell1-cell-mapping-rdjn2\" (UID: \"b21ddc99-df08-4635-996d-872a7c3f6f3b\") " pod="openstack/nova-cell1-cell-mapping-rdjn2" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.894881 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b21ddc99-df08-4635-996d-872a7c3f6f3b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rdjn2\" (UID: \"b21ddc99-df08-4635-996d-872a7c3f6f3b\") " pod="openstack/nova-cell1-cell-mapping-rdjn2" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.895446 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b21ddc99-df08-4635-996d-872a7c3f6f3b-config-data\") pod \"nova-cell1-cell-mapping-rdjn2\" (UID: \"b21ddc99-df08-4635-996d-872a7c3f6f3b\") " pod="openstack/nova-cell1-cell-mapping-rdjn2" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.901872 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2284\" (UniqueName: \"kubernetes.io/projected/b21ddc99-df08-4635-996d-872a7c3f6f3b-kube-api-access-k2284\") pod \"nova-cell1-cell-mapping-rdjn2\" (UID: \"b21ddc99-df08-4635-996d-872a7c3f6f3b\") " pod="openstack/nova-cell1-cell-mapping-rdjn2" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.967283 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.967342 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3453a132-2f25-411c-8ac8-fa0f8f9b958b-log-httpd\") pod 
\"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.967364 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2fsf\" (UniqueName: \"kubernetes.io/projected/3453a132-2f25-411c-8ac8-fa0f8f9b958b-kube-api-access-h2fsf\") pod \"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.967407 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3453a132-2f25-411c-8ac8-fa0f8f9b958b-run-httpd\") pod \"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.967443 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.967476 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.967519 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-scripts\") pod \"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.967553 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-config-data\") pod \"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.968414 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3453a132-2f25-411c-8ac8-fa0f8f9b958b-run-httpd\") pod \"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.968641 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3453a132-2f25-411c-8ac8-fa0f8f9b958b-log-httpd\") pod \"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.971113 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.973692 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.974229 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-config-data\") pod \"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " pod="openstack/ceilometer-0" Feb 19 10:05:57 
crc kubenswrapper[4965]: I0219 10:05:56.974965 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-scripts\") pod \"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.982344 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:56.985721 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2fsf\" (UniqueName: \"kubernetes.io/projected/3453a132-2f25-411c-8ac8-fa0f8f9b958b-kube-api-access-h2fsf\") pod \"ceilometer-0\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:57.075398 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:57.086645 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rdjn2" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:57.214437 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c48eb36-b3c5-4516-a345-6dc813d425cb" path="/var/lib/kubelet/pods/1c48eb36-b3c5-4516-a345-6dc813d425cb/volumes" Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:57.388128 4965 generic.go:334] "Generic (PLEG): container finished" podID="6d323130-9034-45b0-9f95-02b4494ff391" containerID="507b642af4de22e2d6208254efb55f0bd41d0516f0b4d92d0798908d6f278f80" exitCode=0 Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:57.389524 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-dbkhp" event={"ID":"6d323130-9034-45b0-9f95-02b4494ff391","Type":"ContainerDied","Data":"507b642af4de22e2d6208254efb55f0bd41d0516f0b4d92d0798908d6f278f80"} Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:57.389594 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-dbkhp" event={"ID":"6d323130-9034-45b0-9f95-02b4494ff391","Type":"ContainerStarted","Data":"649c5c2d86383d5aa368caa7bf569c17540d93290773970d519f329a0774a7e3"} Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:57.632782 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:05:57 crc kubenswrapper[4965]: W0219 10:05:57.778237 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb21ddc99_df08_4635_996d_872a7c3f6f3b.slice/crio-3a70d4c41e14babdcfedd057c95125d48952d790c4de722bb52ed922984020b0 WatchSource:0}: Error finding container 3a70d4c41e14babdcfedd057c95125d48952d790c4de722bb52ed922984020b0: Status 404 returned error can't find the container with id 3a70d4c41e14babdcfedd057c95125d48952d790c4de722bb52ed922984020b0 Feb 19 10:05:57 crc kubenswrapper[4965]: I0219 10:05:57.778481 4965 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-cell1-cell-mapping-rdjn2"] Feb 19 10:05:58 crc kubenswrapper[4965]: I0219 10:05:58.003535 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:05:58 crc kubenswrapper[4965]: I0219 10:05:58.235984 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:05:58 crc kubenswrapper[4965]: I0219 10:05:58.399702 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3453a132-2f25-411c-8ac8-fa0f8f9b958b","Type":"ContainerStarted","Data":"2c5a3ba9364f2128839107d5d44af1d085d997ca33ef556dcc6d64ec0efa6c3a"} Feb 19 10:05:58 crc kubenswrapper[4965]: I0219 10:05:58.399743 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3453a132-2f25-411c-8ac8-fa0f8f9b958b","Type":"ContainerStarted","Data":"c8e48c4d67d9c3914d6bee8d939787af3a9a2e506d8cd0ab433a2d462c0437c4"} Feb 19 10:05:58 crc kubenswrapper[4965]: I0219 10:05:58.401733 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-dbkhp" event={"ID":"6d323130-9034-45b0-9f95-02b4494ff391","Type":"ContainerStarted","Data":"4939c29d495e6f2cd4b1da938e4d9028249a2ce393ea6aa52c613d0350b32b4e"} Feb 19 10:05:58 crc kubenswrapper[4965]: I0219 10:05:58.402818 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:05:58 crc kubenswrapper[4965]: I0219 10:05:58.404434 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eb877059-a8dd-4347-ac5e-08baba808882" containerName="nova-api-log" containerID="cri-o://c293a7335c0acf4b1b23da789a047620557caffe1b453f3b6e0ed68558acff77" gracePeriod=30 Feb 19 10:05:58 crc kubenswrapper[4965]: I0219 10:05:58.405051 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rdjn2" 
event={"ID":"b21ddc99-df08-4635-996d-872a7c3f6f3b","Type":"ContainerStarted","Data":"9b278ef6932d007329c5791a7c5476eeda2384272529608de619fae7a14c7687"} Feb 19 10:05:58 crc kubenswrapper[4965]: I0219 10:05:58.405075 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rdjn2" event={"ID":"b21ddc99-df08-4635-996d-872a7c3f6f3b","Type":"ContainerStarted","Data":"3a70d4c41e14babdcfedd057c95125d48952d790c4de722bb52ed922984020b0"} Feb 19 10:05:58 crc kubenswrapper[4965]: I0219 10:05:58.405123 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eb877059-a8dd-4347-ac5e-08baba808882" containerName="nova-api-api" containerID="cri-o://a16fc5c3d0e0db847e840708f974a2bba3681c23e9b364723aa9a128602d3e57" gracePeriod=30 Feb 19 10:05:58 crc kubenswrapper[4965]: I0219 10:05:58.428936 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54dd998c-dbkhp" podStartSLOduration=3.428922363 podStartE2EDuration="3.428922363s" podCreationTimestamp="2026-02-19 10:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:05:58.422932337 +0000 UTC m=+1414.044253647" watchObservedRunningTime="2026-02-19 10:05:58.428922363 +0000 UTC m=+1414.050243673" Feb 19 10:05:58 crc kubenswrapper[4965]: I0219 10:05:58.443583 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-rdjn2" podStartSLOduration=2.443561429 podStartE2EDuration="2.443561429s" podCreationTimestamp="2026-02-19 10:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:05:58.439008398 +0000 UTC m=+1414.060329728" watchObservedRunningTime="2026-02-19 10:05:58.443561429 +0000 UTC m=+1414.064882739" Feb 19 10:05:59 crc kubenswrapper[4965]: I0219 
10:05:59.273238 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dgqlg"] Feb 19 10:05:59 crc kubenswrapper[4965]: I0219 10:05:59.276801 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dgqlg" Feb 19 10:05:59 crc kubenswrapper[4965]: I0219 10:05:59.288952 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dgqlg"] Feb 19 10:05:59 crc kubenswrapper[4965]: I0219 10:05:59.420285 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3453a132-2f25-411c-8ac8-fa0f8f9b958b","Type":"ContainerStarted","Data":"2e8feb81bf3f1d6fc3f4e924fa26d1c903f119da0bed5e3191a9704a190b1d7a"} Feb 19 10:05:59 crc kubenswrapper[4965]: I0219 10:05:59.425577 4965 generic.go:334] "Generic (PLEG): container finished" podID="eb877059-a8dd-4347-ac5e-08baba808882" containerID="c293a7335c0acf4b1b23da789a047620557caffe1b453f3b6e0ed68558acff77" exitCode=143 Feb 19 10:05:59 crc kubenswrapper[4965]: I0219 10:05:59.425666 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb877059-a8dd-4347-ac5e-08baba808882","Type":"ContainerDied","Data":"c293a7335c0acf4b1b23da789a047620557caffe1b453f3b6e0ed68558acff77"} Feb 19 10:05:59 crc kubenswrapper[4965]: I0219 10:05:59.433366 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngk6f\" (UniqueName: \"kubernetes.io/projected/0b81c5e4-6bba-49a8-8687-dcff16739800-kube-api-access-ngk6f\") pod \"community-operators-dgqlg\" (UID: \"0b81c5e4-6bba-49a8-8687-dcff16739800\") " pod="openshift-marketplace/community-operators-dgqlg" Feb 19 10:05:59 crc kubenswrapper[4965]: I0219 10:05:59.433544 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0b81c5e4-6bba-49a8-8687-dcff16739800-catalog-content\") pod \"community-operators-dgqlg\" (UID: \"0b81c5e4-6bba-49a8-8687-dcff16739800\") " pod="openshift-marketplace/community-operators-dgqlg" Feb 19 10:05:59 crc kubenswrapper[4965]: I0219 10:05:59.433705 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b81c5e4-6bba-49a8-8687-dcff16739800-utilities\") pod \"community-operators-dgqlg\" (UID: \"0b81c5e4-6bba-49a8-8687-dcff16739800\") " pod="openshift-marketplace/community-operators-dgqlg" Feb 19 10:05:59 crc kubenswrapper[4965]: I0219 10:05:59.535548 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngk6f\" (UniqueName: \"kubernetes.io/projected/0b81c5e4-6bba-49a8-8687-dcff16739800-kube-api-access-ngk6f\") pod \"community-operators-dgqlg\" (UID: \"0b81c5e4-6bba-49a8-8687-dcff16739800\") " pod="openshift-marketplace/community-operators-dgqlg" Feb 19 10:05:59 crc kubenswrapper[4965]: I0219 10:05:59.535663 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b81c5e4-6bba-49a8-8687-dcff16739800-catalog-content\") pod \"community-operators-dgqlg\" (UID: \"0b81c5e4-6bba-49a8-8687-dcff16739800\") " pod="openshift-marketplace/community-operators-dgqlg" Feb 19 10:05:59 crc kubenswrapper[4965]: I0219 10:05:59.535740 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b81c5e4-6bba-49a8-8687-dcff16739800-utilities\") pod \"community-operators-dgqlg\" (UID: \"0b81c5e4-6bba-49a8-8687-dcff16739800\") " pod="openshift-marketplace/community-operators-dgqlg" Feb 19 10:05:59 crc kubenswrapper[4965]: I0219 10:05:59.536233 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0b81c5e4-6bba-49a8-8687-dcff16739800-catalog-content\") pod \"community-operators-dgqlg\" (UID: \"0b81c5e4-6bba-49a8-8687-dcff16739800\") " pod="openshift-marketplace/community-operators-dgqlg" Feb 19 10:05:59 crc kubenswrapper[4965]: I0219 10:05:59.536269 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b81c5e4-6bba-49a8-8687-dcff16739800-utilities\") pod \"community-operators-dgqlg\" (UID: \"0b81c5e4-6bba-49a8-8687-dcff16739800\") " pod="openshift-marketplace/community-operators-dgqlg" Feb 19 10:05:59 crc kubenswrapper[4965]: I0219 10:05:59.556874 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngk6f\" (UniqueName: \"kubernetes.io/projected/0b81c5e4-6bba-49a8-8687-dcff16739800-kube-api-access-ngk6f\") pod \"community-operators-dgqlg\" (UID: \"0b81c5e4-6bba-49a8-8687-dcff16739800\") " pod="openshift-marketplace/community-operators-dgqlg" Feb 19 10:05:59 crc kubenswrapper[4965]: I0219 10:05:59.663173 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dgqlg" Feb 19 10:06:00 crc kubenswrapper[4965]: I0219 10:06:00.207949 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dgqlg"] Feb 19 10:06:00 crc kubenswrapper[4965]: W0219 10:06:00.220616 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b81c5e4_6bba_49a8_8687_dcff16739800.slice/crio-87d893adf10a45da6e5e80ebbc3aa9017b64d9c834458f61a767fc755c997128 WatchSource:0}: Error finding container 87d893adf10a45da6e5e80ebbc3aa9017b64d9c834458f61a767fc755c997128: Status 404 returned error can't find the container with id 87d893adf10a45da6e5e80ebbc3aa9017b64d9c834458f61a767fc755c997128 Feb 19 10:06:00 crc kubenswrapper[4965]: I0219 10:06:00.436910 4965 generic.go:334] "Generic (PLEG): container finished" podID="0b81c5e4-6bba-49a8-8687-dcff16739800" containerID="486ddbfedbe8fe310b840a1694c4e556a2012aac749642592c90d0d6aca1a316" exitCode=0 Feb 19 10:06:00 crc kubenswrapper[4965]: I0219 10:06:00.437030 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgqlg" event={"ID":"0b81c5e4-6bba-49a8-8687-dcff16739800","Type":"ContainerDied","Data":"486ddbfedbe8fe310b840a1694c4e556a2012aac749642592c90d0d6aca1a316"} Feb 19 10:06:00 crc kubenswrapper[4965]: I0219 10:06:00.437459 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgqlg" event={"ID":"0b81c5e4-6bba-49a8-8687-dcff16739800","Type":"ContainerStarted","Data":"87d893adf10a45da6e5e80ebbc3aa9017b64d9c834458f61a767fc755c997128"} Feb 19 10:06:00 crc kubenswrapper[4965]: I0219 10:06:00.440817 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3453a132-2f25-411c-8ac8-fa0f8f9b958b","Type":"ContainerStarted","Data":"46318fc5a617651a1e0a7c3fa95e22907475f4b44ad8cf8718da1eae9ec1b345"} Feb 19 
10:06:01 crc kubenswrapper[4965]: I0219 10:06:01.457582 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgqlg" event={"ID":"0b81c5e4-6bba-49a8-8687-dcff16739800","Type":"ContainerStarted","Data":"074fcc96c41a59b4ac3aba10ce4f4fa544af07529f7e544e778362ad5bf77c09"} Feb 19 10:06:01 crc kubenswrapper[4965]: I0219 10:06:01.730765 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 10:06:02 crc kubenswrapper[4965]: I0219 10:06:02.471596 4965 generic.go:334] "Generic (PLEG): container finished" podID="0b81c5e4-6bba-49a8-8687-dcff16739800" containerID="074fcc96c41a59b4ac3aba10ce4f4fa544af07529f7e544e778362ad5bf77c09" exitCode=0 Feb 19 10:06:02 crc kubenswrapper[4965]: I0219 10:06:02.471996 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgqlg" event={"ID":"0b81c5e4-6bba-49a8-8687-dcff16739800","Type":"ContainerDied","Data":"074fcc96c41a59b4ac3aba10ce4f4fa544af07529f7e544e778362ad5bf77c09"} Feb 19 10:06:02 crc kubenswrapper[4965]: I0219 10:06:02.480532 4965 generic.go:334] "Generic (PLEG): container finished" podID="eb877059-a8dd-4347-ac5e-08baba808882" containerID="a16fc5c3d0e0db847e840708f974a2bba3681c23e9b364723aa9a128602d3e57" exitCode=0 Feb 19 10:06:02 crc kubenswrapper[4965]: I0219 10:06:02.480573 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb877059-a8dd-4347-ac5e-08baba808882","Type":"ContainerDied","Data":"a16fc5c3d0e0db847e840708f974a2bba3681c23e9b364723aa9a128602d3e57"} Feb 19 10:06:02 crc kubenswrapper[4965]: I0219 10:06:02.798701 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:06:02 crc kubenswrapper[4965]: I0219 10:06:02.907550 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb877059-a8dd-4347-ac5e-08baba808882-combined-ca-bundle\") pod \"eb877059-a8dd-4347-ac5e-08baba808882\" (UID: \"eb877059-a8dd-4347-ac5e-08baba808882\") " Feb 19 10:06:02 crc kubenswrapper[4965]: I0219 10:06:02.907707 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtk98\" (UniqueName: \"kubernetes.io/projected/eb877059-a8dd-4347-ac5e-08baba808882-kube-api-access-xtk98\") pod \"eb877059-a8dd-4347-ac5e-08baba808882\" (UID: \"eb877059-a8dd-4347-ac5e-08baba808882\") " Feb 19 10:06:02 crc kubenswrapper[4965]: I0219 10:06:02.907808 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb877059-a8dd-4347-ac5e-08baba808882-logs\") pod \"eb877059-a8dd-4347-ac5e-08baba808882\" (UID: \"eb877059-a8dd-4347-ac5e-08baba808882\") " Feb 19 10:06:02 crc kubenswrapper[4965]: I0219 10:06:02.907998 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb877059-a8dd-4347-ac5e-08baba808882-config-data\") pod \"eb877059-a8dd-4347-ac5e-08baba808882\" (UID: \"eb877059-a8dd-4347-ac5e-08baba808882\") " Feb 19 10:06:02 crc kubenswrapper[4965]: I0219 10:06:02.910644 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb877059-a8dd-4347-ac5e-08baba808882-logs" (OuterVolumeSpecName: "logs") pod "eb877059-a8dd-4347-ac5e-08baba808882" (UID: "eb877059-a8dd-4347-ac5e-08baba808882"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:06:02 crc kubenswrapper[4965]: I0219 10:06:02.950883 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb877059-a8dd-4347-ac5e-08baba808882-kube-api-access-xtk98" (OuterVolumeSpecName: "kube-api-access-xtk98") pod "eb877059-a8dd-4347-ac5e-08baba808882" (UID: "eb877059-a8dd-4347-ac5e-08baba808882"). InnerVolumeSpecName "kube-api-access-xtk98". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.013514 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtk98\" (UniqueName: \"kubernetes.io/projected/eb877059-a8dd-4347-ac5e-08baba808882-kube-api-access-xtk98\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.013566 4965 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb877059-a8dd-4347-ac5e-08baba808882-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.030344 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb877059-a8dd-4347-ac5e-08baba808882-config-data" (OuterVolumeSpecName: "config-data") pod "eb877059-a8dd-4347-ac5e-08baba808882" (UID: "eb877059-a8dd-4347-ac5e-08baba808882"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.056490 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb877059-a8dd-4347-ac5e-08baba808882-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb877059-a8dd-4347-ac5e-08baba808882" (UID: "eb877059-a8dd-4347-ac5e-08baba808882"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.115908 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb877059-a8dd-4347-ac5e-08baba808882-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.115953 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb877059-a8dd-4347-ac5e-08baba808882-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.520178 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb877059-a8dd-4347-ac5e-08baba808882","Type":"ContainerDied","Data":"bb3e5893598adae78c0f7f2f4db331fbde15be8cc56d30e10b0f68e9ec76bfae"} Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.520552 4965 scope.go:117] "RemoveContainer" containerID="a16fc5c3d0e0db847e840708f974a2bba3681c23e9b364723aa9a128602d3e57" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.520762 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.527121 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3453a132-2f25-411c-8ac8-fa0f8f9b958b","Type":"ContainerStarted","Data":"4671f1140347295dd9324e247ac7c25c93e8bd1ab5960be663d43a0270eeef8b"} Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.527408 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3453a132-2f25-411c-8ac8-fa0f8f9b958b" containerName="ceilometer-central-agent" containerID="cri-o://2c5a3ba9364f2128839107d5d44af1d085d997ca33ef556dcc6d64ec0efa6c3a" gracePeriod=30 Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.528674 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.528931 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3453a132-2f25-411c-8ac8-fa0f8f9b958b" containerName="proxy-httpd" containerID="cri-o://4671f1140347295dd9324e247ac7c25c93e8bd1ab5960be663d43a0270eeef8b" gracePeriod=30 Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.529002 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3453a132-2f25-411c-8ac8-fa0f8f9b958b" containerName="sg-core" containerID="cri-o://46318fc5a617651a1e0a7c3fa95e22907475f4b44ad8cf8718da1eae9ec1b345" gracePeriod=30 Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.529041 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3453a132-2f25-411c-8ac8-fa0f8f9b958b" containerName="ceilometer-notification-agent" containerID="cri-o://2e8feb81bf3f1d6fc3f4e924fa26d1c903f119da0bed5e3191a9704a190b1d7a" gracePeriod=30 Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.568758 4965 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.618301372 podStartE2EDuration="7.568736968s" podCreationTimestamp="2026-02-19 10:05:56 +0000 UTC" firstStartedPulling="2026-02-19 10:05:57.647268785 +0000 UTC m=+1413.268590095" lastFinishedPulling="2026-02-19 10:06:02.597704381 +0000 UTC m=+1418.219025691" observedRunningTime="2026-02-19 10:06:03.566102825 +0000 UTC m=+1419.187424135" watchObservedRunningTime="2026-02-19 10:06:03.568736968 +0000 UTC m=+1419.190058278" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.571325 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgqlg" event={"ID":"0b81c5e4-6bba-49a8-8687-dcff16739800","Type":"ContainerStarted","Data":"3a1c87ed15711e87c59eac7ac5171dbaf79f27a23dfc23c77b7391db5101d5e5"} Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.583177 4965 scope.go:117] "RemoveContainer" containerID="c293a7335c0acf4b1b23da789a047620557caffe1b453f3b6e0ed68558acff77" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.599663 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.611155 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.635466 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 10:06:03 crc kubenswrapper[4965]: E0219 10:06:03.636229 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb877059-a8dd-4347-ac5e-08baba808882" containerName="nova-api-api" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.636254 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb877059-a8dd-4347-ac5e-08baba808882" containerName="nova-api-api" Feb 19 10:06:03 crc kubenswrapper[4965]: E0219 10:06:03.636278 4965 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="eb877059-a8dd-4347-ac5e-08baba808882" containerName="nova-api-log" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.636286 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb877059-a8dd-4347-ac5e-08baba808882" containerName="nova-api-log" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.636596 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb877059-a8dd-4347-ac5e-08baba808882" containerName="nova-api-log" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.636641 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb877059-a8dd-4347-ac5e-08baba808882" containerName="nova-api-api" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.637683 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dgqlg" podStartSLOduration=2.185140566 podStartE2EDuration="4.637657243s" podCreationTimestamp="2026-02-19 10:05:59 +0000 UTC" firstStartedPulling="2026-02-19 10:06:00.438810507 +0000 UTC m=+1416.060131817" lastFinishedPulling="2026-02-19 10:06:02.891327184 +0000 UTC m=+1418.512648494" observedRunningTime="2026-02-19 10:06:03.611851586 +0000 UTC m=+1419.233172896" watchObservedRunningTime="2026-02-19 10:06:03.637657243 +0000 UTC m=+1419.258978553" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.638390 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.646896 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.647356 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.647781 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.660464 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.731647 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-config-data\") pod \"nova-api-0\" (UID: \"6c20b024-6fec-4f85-8250-1238c52a6b95\") " pod="openstack/nova-api-0" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.731688 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs9x5\" (UniqueName: \"kubernetes.io/projected/6c20b024-6fec-4f85-8250-1238c52a6b95-kube-api-access-cs9x5\") pod \"nova-api-0\" (UID: \"6c20b024-6fec-4f85-8250-1238c52a6b95\") " pod="openstack/nova-api-0" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.731732 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c20b024-6fec-4f85-8250-1238c52a6b95-logs\") pod \"nova-api-0\" (UID: \"6c20b024-6fec-4f85-8250-1238c52a6b95\") " pod="openstack/nova-api-0" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.732633 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6c20b024-6fec-4f85-8250-1238c52a6b95\") " pod="openstack/nova-api-0" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.732768 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6c20b024-6fec-4f85-8250-1238c52a6b95\") " pod="openstack/nova-api-0" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.734091 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-public-tls-certs\") pod \"nova-api-0\" (UID: \"6c20b024-6fec-4f85-8250-1238c52a6b95\") " pod="openstack/nova-api-0" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.836928 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6c20b024-6fec-4f85-8250-1238c52a6b95\") " pod="openstack/nova-api-0" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.836984 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6c20b024-6fec-4f85-8250-1238c52a6b95\") " pod="openstack/nova-api-0" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.837044 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-public-tls-certs\") pod \"nova-api-0\" (UID: \"6c20b024-6fec-4f85-8250-1238c52a6b95\") " pod="openstack/nova-api-0" 
Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.837154 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-config-data\") pod \"nova-api-0\" (UID: \"6c20b024-6fec-4f85-8250-1238c52a6b95\") " pod="openstack/nova-api-0" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.837178 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs9x5\" (UniqueName: \"kubernetes.io/projected/6c20b024-6fec-4f85-8250-1238c52a6b95-kube-api-access-cs9x5\") pod \"nova-api-0\" (UID: \"6c20b024-6fec-4f85-8250-1238c52a6b95\") " pod="openstack/nova-api-0" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.837252 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c20b024-6fec-4f85-8250-1238c52a6b95-logs\") pod \"nova-api-0\" (UID: \"6c20b024-6fec-4f85-8250-1238c52a6b95\") " pod="openstack/nova-api-0" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.837757 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c20b024-6fec-4f85-8250-1238c52a6b95-logs\") pod \"nova-api-0\" (UID: \"6c20b024-6fec-4f85-8250-1238c52a6b95\") " pod="openstack/nova-api-0" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.843654 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-public-tls-certs\") pod \"nova-api-0\" (UID: \"6c20b024-6fec-4f85-8250-1238c52a6b95\") " pod="openstack/nova-api-0" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.844745 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-config-data\") pod \"nova-api-0\" (UID: 
\"6c20b024-6fec-4f85-8250-1238c52a6b95\") " pod="openstack/nova-api-0" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.847853 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6c20b024-6fec-4f85-8250-1238c52a6b95\") " pod="openstack/nova-api-0" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.854703 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6c20b024-6fec-4f85-8250-1238c52a6b95\") " pod="openstack/nova-api-0" Feb 19 10:06:03 crc kubenswrapper[4965]: I0219 10:06:03.854787 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs9x5\" (UniqueName: \"kubernetes.io/projected/6c20b024-6fec-4f85-8250-1238c52a6b95-kube-api-access-cs9x5\") pod \"nova-api-0\" (UID: \"6c20b024-6fec-4f85-8250-1238c52a6b95\") " pod="openstack/nova-api-0" Feb 19 10:06:04 crc kubenswrapper[4965]: I0219 10:06:04.024103 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:06:04 crc kubenswrapper[4965]: I0219 10:06:04.539652 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:06:04 crc kubenswrapper[4965]: W0219 10:06:04.551091 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c20b024_6fec_4f85_8250_1238c52a6b95.slice/crio-ec8f8d02b1e036ebd6be4c564e3c312a0d37fa758fef9200546360ae82dbf710 WatchSource:0}: Error finding container ec8f8d02b1e036ebd6be4c564e3c312a0d37fa758fef9200546360ae82dbf710: Status 404 returned error can't find the container with id ec8f8d02b1e036ebd6be4c564e3c312a0d37fa758fef9200546360ae82dbf710 Feb 19 10:06:04 crc kubenswrapper[4965]: I0219 10:06:04.583390 4965 generic.go:334] "Generic (PLEG): container finished" podID="3453a132-2f25-411c-8ac8-fa0f8f9b958b" containerID="4671f1140347295dd9324e247ac7c25c93e8bd1ab5960be663d43a0270eeef8b" exitCode=0 Feb 19 10:06:04 crc kubenswrapper[4965]: I0219 10:06:04.583455 4965 generic.go:334] "Generic (PLEG): container finished" podID="3453a132-2f25-411c-8ac8-fa0f8f9b958b" containerID="46318fc5a617651a1e0a7c3fa95e22907475f4b44ad8cf8718da1eae9ec1b345" exitCode=2 Feb 19 10:06:04 crc kubenswrapper[4965]: I0219 10:06:04.583471 4965 generic.go:334] "Generic (PLEG): container finished" podID="3453a132-2f25-411c-8ac8-fa0f8f9b958b" containerID="2e8feb81bf3f1d6fc3f4e924fa26d1c903f119da0bed5e3191a9704a190b1d7a" exitCode=0 Feb 19 10:06:04 crc kubenswrapper[4965]: I0219 10:06:04.583480 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3453a132-2f25-411c-8ac8-fa0f8f9b958b","Type":"ContainerDied","Data":"4671f1140347295dd9324e247ac7c25c93e8bd1ab5960be663d43a0270eeef8b"} Feb 19 10:06:04 crc kubenswrapper[4965]: I0219 10:06:04.583551 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3453a132-2f25-411c-8ac8-fa0f8f9b958b","Type":"ContainerDied","Data":"46318fc5a617651a1e0a7c3fa95e22907475f4b44ad8cf8718da1eae9ec1b345"} Feb 19 10:06:04 crc kubenswrapper[4965]: I0219 10:06:04.583568 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3453a132-2f25-411c-8ac8-fa0f8f9b958b","Type":"ContainerDied","Data":"2e8feb81bf3f1d6fc3f4e924fa26d1c903f119da0bed5e3191a9704a190b1d7a"} Feb 19 10:06:04 crc kubenswrapper[4965]: I0219 10:06:04.585238 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6c20b024-6fec-4f85-8250-1238c52a6b95","Type":"ContainerStarted","Data":"ec8f8d02b1e036ebd6be4c564e3c312a0d37fa758fef9200546360ae82dbf710"} Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.217985 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb877059-a8dd-4347-ac5e-08baba808882" path="/var/lib/kubelet/pods/eb877059-a8dd-4347-ac5e-08baba808882/volumes" Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.614450 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6c20b024-6fec-4f85-8250-1238c52a6b95","Type":"ContainerStarted","Data":"c07acf87a868063127fde6f534eeea64fa997a8801dbc3d9e5191cb575d6041c"} Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.615807 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6c20b024-6fec-4f85-8250-1238c52a6b95","Type":"ContainerStarted","Data":"047edf91c45c5b44f961f6ddc800f6bce583c1eb724fe25de58cdf079c27bb26"} Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.618901 4965 generic.go:334] "Generic (PLEG): container finished" podID="b21ddc99-df08-4635-996d-872a7c3f6f3b" containerID="9b278ef6932d007329c5791a7c5476eeda2384272529608de619fae7a14c7687" exitCode=0 Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.619241 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-cell-mapping-rdjn2" event={"ID":"b21ddc99-df08-4635-996d-872a7c3f6f3b","Type":"ContainerDied","Data":"9b278ef6932d007329c5791a7c5476eeda2384272529608de619fae7a14c7687"} Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.632321 4965 generic.go:334] "Generic (PLEG): container finished" podID="3453a132-2f25-411c-8ac8-fa0f8f9b958b" containerID="2c5a3ba9364f2128839107d5d44af1d085d997ca33ef556dcc6d64ec0efa6c3a" exitCode=0 Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.632531 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3453a132-2f25-411c-8ac8-fa0f8f9b958b","Type":"ContainerDied","Data":"2c5a3ba9364f2128839107d5d44af1d085d997ca33ef556dcc6d64ec0efa6c3a"} Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.640878 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.639125682 podStartE2EDuration="2.639125682s" podCreationTimestamp="2026-02-19 10:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:06:05.635151856 +0000 UTC m=+1421.256473166" watchObservedRunningTime="2026-02-19 10:06:05.639125682 +0000 UTC m=+1421.260446992" Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.807362 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.835074 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.897663 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-s9fh4"] Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.898151 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4" podUID="03baa534-d46c-4cb3-93ce-d124f65241ed" containerName="dnsmasq-dns" containerID="cri-o://3c8017670763c823c5cc67a9fc487d6567a64c60883754f318545572af0596ff" gracePeriod=10 Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.979941 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-scripts\") pod \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.980325 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3453a132-2f25-411c-8ac8-fa0f8f9b958b-log-httpd\") pod \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.980423 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2fsf\" (UniqueName: \"kubernetes.io/projected/3453a132-2f25-411c-8ac8-fa0f8f9b958b-kube-api-access-h2fsf\") pod \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.980469 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-ceilometer-tls-certs\") pod \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " Feb 19 
10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.980561 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-config-data\") pod \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.980673 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3453a132-2f25-411c-8ac8-fa0f8f9b958b-run-httpd\") pod \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.980787 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-sg-core-conf-yaml\") pod \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.980826 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-combined-ca-bundle\") pod \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\" (UID: \"3453a132-2f25-411c-8ac8-fa0f8f9b958b\") " Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.981246 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3453a132-2f25-411c-8ac8-fa0f8f9b958b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3453a132-2f25-411c-8ac8-fa0f8f9b958b" (UID: "3453a132-2f25-411c-8ac8-fa0f8f9b958b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.981641 4965 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3453a132-2f25-411c-8ac8-fa0f8f9b958b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.981799 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3453a132-2f25-411c-8ac8-fa0f8f9b958b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3453a132-2f25-411c-8ac8-fa0f8f9b958b" (UID: "3453a132-2f25-411c-8ac8-fa0f8f9b958b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.989401 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-scripts" (OuterVolumeSpecName: "scripts") pod "3453a132-2f25-411c-8ac8-fa0f8f9b958b" (UID: "3453a132-2f25-411c-8ac8-fa0f8f9b958b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:06:05 crc kubenswrapper[4965]: I0219 10:06:05.996892 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3453a132-2f25-411c-8ac8-fa0f8f9b958b-kube-api-access-h2fsf" (OuterVolumeSpecName: "kube-api-access-h2fsf") pod "3453a132-2f25-411c-8ac8-fa0f8f9b958b" (UID: "3453a132-2f25-411c-8ac8-fa0f8f9b958b"). InnerVolumeSpecName "kube-api-access-h2fsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.015260 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3453a132-2f25-411c-8ac8-fa0f8f9b958b" (UID: "3453a132-2f25-411c-8ac8-fa0f8f9b958b"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.089516 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3453a132-2f25-411c-8ac8-fa0f8f9b958b" (UID: "3453a132-2f25-411c-8ac8-fa0f8f9b958b"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.089952 4965 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.089989 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.090006 4965 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3453a132-2f25-411c-8ac8-fa0f8f9b958b-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.090019 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2fsf\" (UniqueName: \"kubernetes.io/projected/3453a132-2f25-411c-8ac8-fa0f8f9b958b-kube-api-access-h2fsf\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.090032 4965 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.092922 4965 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3453a132-2f25-411c-8ac8-fa0f8f9b958b" (UID: "3453a132-2f25-411c-8ac8-fa0f8f9b958b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.138898 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-config-data" (OuterVolumeSpecName: "config-data") pod "3453a132-2f25-411c-8ac8-fa0f8f9b958b" (UID: "3453a132-2f25-411c-8ac8-fa0f8f9b958b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.197098 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.197135 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3453a132-2f25-411c-8ac8-fa0f8f9b958b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.414526 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.502092 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-ovsdbserver-nb\") pod \"03baa534-d46c-4cb3-93ce-d124f65241ed\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.502143 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-ovsdbserver-sb\") pod \"03baa534-d46c-4cb3-93ce-d124f65241ed\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.502200 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-dns-svc\") pod \"03baa534-d46c-4cb3-93ce-d124f65241ed\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.502368 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-dns-swift-storage-0\") pod \"03baa534-d46c-4cb3-93ce-d124f65241ed\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.502481 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-config\") pod \"03baa534-d46c-4cb3-93ce-d124f65241ed\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.502562 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgwf8\" 
(UniqueName: \"kubernetes.io/projected/03baa534-d46c-4cb3-93ce-d124f65241ed-kube-api-access-tgwf8\") pod \"03baa534-d46c-4cb3-93ce-d124f65241ed\" (UID: \"03baa534-d46c-4cb3-93ce-d124f65241ed\") " Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.505688 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03baa534-d46c-4cb3-93ce-d124f65241ed-kube-api-access-tgwf8" (OuterVolumeSpecName: "kube-api-access-tgwf8") pod "03baa534-d46c-4cb3-93ce-d124f65241ed" (UID: "03baa534-d46c-4cb3-93ce-d124f65241ed"). InnerVolumeSpecName "kube-api-access-tgwf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.555885 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-config" (OuterVolumeSpecName: "config") pod "03baa534-d46c-4cb3-93ce-d124f65241ed" (UID: "03baa534-d46c-4cb3-93ce-d124f65241ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.558730 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "03baa534-d46c-4cb3-93ce-d124f65241ed" (UID: "03baa534-d46c-4cb3-93ce-d124f65241ed"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.558957 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "03baa534-d46c-4cb3-93ce-d124f65241ed" (UID: "03baa534-d46c-4cb3-93ce-d124f65241ed"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.562682 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "03baa534-d46c-4cb3-93ce-d124f65241ed" (UID: "03baa534-d46c-4cb3-93ce-d124f65241ed"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.566355 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "03baa534-d46c-4cb3-93ce-d124f65241ed" (UID: "03baa534-d46c-4cb3-93ce-d124f65241ed"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.605499 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.605547 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.605562 4965 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.605575 4965 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-dns-swift-storage-0\") on node \"crc\" 
DevicePath \"\"" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.605589 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03baa534-d46c-4cb3-93ce-d124f65241ed-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.605600 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgwf8\" (UniqueName: \"kubernetes.io/projected/03baa534-d46c-4cb3-93ce-d124f65241ed-kube-api-access-tgwf8\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.643216 4965 generic.go:334] "Generic (PLEG): container finished" podID="03baa534-d46c-4cb3-93ce-d124f65241ed" containerID="3c8017670763c823c5cc67a9fc487d6567a64c60883754f318545572af0596ff" exitCode=0 Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.643247 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4" event={"ID":"03baa534-d46c-4cb3-93ce-d124f65241ed","Type":"ContainerDied","Data":"3c8017670763c823c5cc67a9fc487d6567a64c60883754f318545572af0596ff"} Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.643300 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.644481 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-s9fh4" event={"ID":"03baa534-d46c-4cb3-93ce-d124f65241ed","Type":"ContainerDied","Data":"369fa776a874e268ee544a320b9003afa4f74f3c569eb275e5adc7d34a883b65"} Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.644584 4965 scope.go:117] "RemoveContainer" containerID="3c8017670763c823c5cc67a9fc487d6567a64c60883754f318545572af0596ff" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.648808 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.652276 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3453a132-2f25-411c-8ac8-fa0f8f9b958b","Type":"ContainerDied","Data":"c8e48c4d67d9c3914d6bee8d939787af3a9a2e506d8cd0ab433a2d462c0437c4"} Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.668443 4965 scope.go:117] "RemoveContainer" containerID="c765b087378c663902e648dc871993e786501e97cf054ad30939e2d5a82e3fb5" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.704437 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-s9fh4"] Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.710639 4965 scope.go:117] "RemoveContainer" containerID="3c8017670763c823c5cc67a9fc487d6567a64c60883754f318545572af0596ff" Feb 19 10:06:06 crc kubenswrapper[4965]: E0219 10:06:06.714356 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c8017670763c823c5cc67a9fc487d6567a64c60883754f318545572af0596ff\": container with ID starting with 3c8017670763c823c5cc67a9fc487d6567a64c60883754f318545572af0596ff not found: ID does not exist" containerID="3c8017670763c823c5cc67a9fc487d6567a64c60883754f318545572af0596ff" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.714409 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c8017670763c823c5cc67a9fc487d6567a64c60883754f318545572af0596ff"} err="failed to get container status \"3c8017670763c823c5cc67a9fc487d6567a64c60883754f318545572af0596ff\": rpc error: code = NotFound desc = could not find container \"3c8017670763c823c5cc67a9fc487d6567a64c60883754f318545572af0596ff\": container with ID starting with 3c8017670763c823c5cc67a9fc487d6567a64c60883754f318545572af0596ff not found: ID does not exist" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.714434 4965 
scope.go:117] "RemoveContainer" containerID="c765b087378c663902e648dc871993e786501e97cf054ad30939e2d5a82e3fb5" Feb 19 10:06:06 crc kubenswrapper[4965]: E0219 10:06:06.714754 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c765b087378c663902e648dc871993e786501e97cf054ad30939e2d5a82e3fb5\": container with ID starting with c765b087378c663902e648dc871993e786501e97cf054ad30939e2d5a82e3fb5 not found: ID does not exist" containerID="c765b087378c663902e648dc871993e786501e97cf054ad30939e2d5a82e3fb5" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.714800 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c765b087378c663902e648dc871993e786501e97cf054ad30939e2d5a82e3fb5"} err="failed to get container status \"c765b087378c663902e648dc871993e786501e97cf054ad30939e2d5a82e3fb5\": rpc error: code = NotFound desc = could not find container \"c765b087378c663902e648dc871993e786501e97cf054ad30939e2d5a82e3fb5\": container with ID starting with c765b087378c663902e648dc871993e786501e97cf054ad30939e2d5a82e3fb5 not found: ID does not exist" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.714832 4965 scope.go:117] "RemoveContainer" containerID="4671f1140347295dd9324e247ac7c25c93e8bd1ab5960be663d43a0270eeef8b" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.733737 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-s9fh4"] Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.744007 4965 scope.go:117] "RemoveContainer" containerID="46318fc5a617651a1e0a7c3fa95e22907475f4b44ad8cf8718da1eae9ec1b345" Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.747183 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.759439 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 
10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.768613 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:06:06 crc kubenswrapper[4965]: E0219 10:06:06.769062 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3453a132-2f25-411c-8ac8-fa0f8f9b958b" containerName="proxy-httpd"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.769078 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="3453a132-2f25-411c-8ac8-fa0f8f9b958b" containerName="proxy-httpd"
Feb 19 10:06:06 crc kubenswrapper[4965]: E0219 10:06:06.769099 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3453a132-2f25-411c-8ac8-fa0f8f9b958b" containerName="ceilometer-central-agent"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.769106 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="3453a132-2f25-411c-8ac8-fa0f8f9b958b" containerName="ceilometer-central-agent"
Feb 19 10:06:06 crc kubenswrapper[4965]: E0219 10:06:06.769121 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3453a132-2f25-411c-8ac8-fa0f8f9b958b" containerName="ceilometer-notification-agent"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.769130 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="3453a132-2f25-411c-8ac8-fa0f8f9b958b" containerName="ceilometer-notification-agent"
Feb 19 10:06:06 crc kubenswrapper[4965]: E0219 10:06:06.769142 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03baa534-d46c-4cb3-93ce-d124f65241ed" containerName="dnsmasq-dns"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.769147 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="03baa534-d46c-4cb3-93ce-d124f65241ed" containerName="dnsmasq-dns"
Feb 19 10:06:06 crc kubenswrapper[4965]: E0219 10:06:06.769166 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3453a132-2f25-411c-8ac8-fa0f8f9b958b" containerName="sg-core"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.769172 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="3453a132-2f25-411c-8ac8-fa0f8f9b958b" containerName="sg-core"
Feb 19 10:06:06 crc kubenswrapper[4965]: E0219 10:06:06.769184 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03baa534-d46c-4cb3-93ce-d124f65241ed" containerName="init"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.769189 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="03baa534-d46c-4cb3-93ce-d124f65241ed" containerName="init"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.769420 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="03baa534-d46c-4cb3-93ce-d124f65241ed" containerName="dnsmasq-dns"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.769431 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="3453a132-2f25-411c-8ac8-fa0f8f9b958b" containerName="ceilometer-notification-agent"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.769445 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="3453a132-2f25-411c-8ac8-fa0f8f9b958b" containerName="proxy-httpd"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.769454 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="3453a132-2f25-411c-8ac8-fa0f8f9b958b" containerName="sg-core"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.769475 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="3453a132-2f25-411c-8ac8-fa0f8f9b958b" containerName="ceilometer-central-agent"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.771435 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.773131 4965 scope.go:117] "RemoveContainer" containerID="2e8feb81bf3f1d6fc3f4e924fa26d1c903f119da0bed5e3191a9704a190b1d7a"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.774884 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.775776 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.775931 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.793768 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.862963 4965 scope.go:117] "RemoveContainer" containerID="2c5a3ba9364f2128839107d5d44af1d085d997ca33ef556dcc6d64ec0efa6c3a"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.911341 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-log-httpd\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.911441 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-scripts\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.911495 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-run-httpd\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.911576 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.911630 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.911660 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-config-data\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.911688 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hprq8\" (UniqueName: \"kubernetes.io/projected/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-kube-api-access-hprq8\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:06 crc kubenswrapper[4965]: I0219 10:06:06.911737 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.013448 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-log-httpd\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.013537 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-scripts\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.013591 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-run-httpd\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.013643 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.013693 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.013717 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-config-data\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.013747 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hprq8\" (UniqueName: \"kubernetes.io/projected/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-kube-api-access-hprq8\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.013784 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.014684 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-log-httpd\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.014748 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-run-httpd\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.019553 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.021187 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-config-data\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.021733 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.027634 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-scripts\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.030318 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.033390 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hprq8\" (UniqueName: \"kubernetes.io/projected/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-kube-api-access-hprq8\") pod \"ceilometer-0\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " pod="openstack/ceilometer-0"
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.157031 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.218843 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03baa534-d46c-4cb3-93ce-d124f65241ed" path="/var/lib/kubelet/pods/03baa534-d46c-4cb3-93ce-d124f65241ed/volumes"
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.220017 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3453a132-2f25-411c-8ac8-fa0f8f9b958b" path="/var/lib/kubelet/pods/3453a132-2f25-411c-8ac8-fa0f8f9b958b/volumes"
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.266314 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rdjn2"
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.423027 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b21ddc99-df08-4635-996d-872a7c3f6f3b-config-data\") pod \"b21ddc99-df08-4635-996d-872a7c3f6f3b\" (UID: \"b21ddc99-df08-4635-996d-872a7c3f6f3b\") "
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.423476 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2284\" (UniqueName: \"kubernetes.io/projected/b21ddc99-df08-4635-996d-872a7c3f6f3b-kube-api-access-k2284\") pod \"b21ddc99-df08-4635-996d-872a7c3f6f3b\" (UID: \"b21ddc99-df08-4635-996d-872a7c3f6f3b\") "
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.424279 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b21ddc99-df08-4635-996d-872a7c3f6f3b-scripts\") pod \"b21ddc99-df08-4635-996d-872a7c3f6f3b\" (UID: \"b21ddc99-df08-4635-996d-872a7c3f6f3b\") "
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.424429 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b21ddc99-df08-4635-996d-872a7c3f6f3b-combined-ca-bundle\") pod \"b21ddc99-df08-4635-996d-872a7c3f6f3b\" (UID: \"b21ddc99-df08-4635-996d-872a7c3f6f3b\") "
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.427678 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b21ddc99-df08-4635-996d-872a7c3f6f3b-kube-api-access-k2284" (OuterVolumeSpecName: "kube-api-access-k2284") pod "b21ddc99-df08-4635-996d-872a7c3f6f3b" (UID: "b21ddc99-df08-4635-996d-872a7c3f6f3b"). InnerVolumeSpecName "kube-api-access-k2284". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.428253 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b21ddc99-df08-4635-996d-872a7c3f6f3b-scripts" (OuterVolumeSpecName: "scripts") pod "b21ddc99-df08-4635-996d-872a7c3f6f3b" (UID: "b21ddc99-df08-4635-996d-872a7c3f6f3b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.454926 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b21ddc99-df08-4635-996d-872a7c3f6f3b-config-data" (OuterVolumeSpecName: "config-data") pod "b21ddc99-df08-4635-996d-872a7c3f6f3b" (UID: "b21ddc99-df08-4635-996d-872a7c3f6f3b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.479976 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b21ddc99-df08-4635-996d-872a7c3f6f3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b21ddc99-df08-4635-996d-872a7c3f6f3b" (UID: "b21ddc99-df08-4635-996d-872a7c3f6f3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.527629 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b21ddc99-df08-4635-996d-872a7c3f6f3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.527669 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b21ddc99-df08-4635-996d-872a7c3f6f3b-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.527680 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2284\" (UniqueName: \"kubernetes.io/projected/b21ddc99-df08-4635-996d-872a7c3f6f3b-kube-api-access-k2284\") on node \"crc\" DevicePath \"\""
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.527689 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b21ddc99-df08-4635-996d-872a7c3f6f3b-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.665041 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.665324 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rdjn2"
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.665790 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rdjn2" event={"ID":"b21ddc99-df08-4635-996d-872a7c3f6f3b","Type":"ContainerDied","Data":"3a70d4c41e14babdcfedd057c95125d48952d790c4de722bb52ed922984020b0"}
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.665819 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a70d4c41e14babdcfedd057c95125d48952d790c4de722bb52ed922984020b0"
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.840092 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.840747 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6c20b024-6fec-4f85-8250-1238c52a6b95" containerName="nova-api-api" containerID="cri-o://c07acf87a868063127fde6f534eeea64fa997a8801dbc3d9e5191cb575d6041c" gracePeriod=30
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.840815 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6c20b024-6fec-4f85-8250-1238c52a6b95" containerName="nova-api-log" containerID="cri-o://047edf91c45c5b44f961f6ddc800f6bce583c1eb724fe25de58cdf079c27bb26" gracePeriod=30
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.854298 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.854577 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="edd94405-a4e9-4078-b7f7-d0fe27e28d69" containerName="nova-scheduler-scheduler" containerID="cri-o://e44bda06a835128697318ffaa90768ab207e29cafd57d5c0492c8732a45e70a7" gracePeriod=30
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.872282 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.872770 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fd6d75f5-49d6-41d3-b812-e406dea5a4d1" containerName="nova-metadata-log" containerID="cri-o://06b8e45a13ff271edb63dc5141fa51d3cd108a2a4888b96e852541d8b583efcc" gracePeriod=30
Feb 19 10:06:07 crc kubenswrapper[4965]: I0219 10:06:07.872856 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fd6d75f5-49d6-41d3-b812-e406dea5a4d1" containerName="nova-metadata-metadata" containerID="cri-o://475d7b0f8a76a385b34955f80982c1f0792b7806b49576cbe9b801197953d60e" gracePeriod=30
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.442263 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.547957 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-combined-ca-bundle\") pod \"6c20b024-6fec-4f85-8250-1238c52a6b95\" (UID: \"6c20b024-6fec-4f85-8250-1238c52a6b95\") "
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.548086 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs9x5\" (UniqueName: \"kubernetes.io/projected/6c20b024-6fec-4f85-8250-1238c52a6b95-kube-api-access-cs9x5\") pod \"6c20b024-6fec-4f85-8250-1238c52a6b95\" (UID: \"6c20b024-6fec-4f85-8250-1238c52a6b95\") "
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.548173 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-public-tls-certs\") pod \"6c20b024-6fec-4f85-8250-1238c52a6b95\" (UID: \"6c20b024-6fec-4f85-8250-1238c52a6b95\") "
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.548287 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-internal-tls-certs\") pod \"6c20b024-6fec-4f85-8250-1238c52a6b95\" (UID: \"6c20b024-6fec-4f85-8250-1238c52a6b95\") "
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.548340 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-config-data\") pod \"6c20b024-6fec-4f85-8250-1238c52a6b95\" (UID: \"6c20b024-6fec-4f85-8250-1238c52a6b95\") "
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.548376 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c20b024-6fec-4f85-8250-1238c52a6b95-logs\") pod \"6c20b024-6fec-4f85-8250-1238c52a6b95\" (UID: \"6c20b024-6fec-4f85-8250-1238c52a6b95\") "
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.549306 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c20b024-6fec-4f85-8250-1238c52a6b95-logs" (OuterVolumeSpecName: "logs") pod "6c20b024-6fec-4f85-8250-1238c52a6b95" (UID: "6c20b024-6fec-4f85-8250-1238c52a6b95"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.561400 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c20b024-6fec-4f85-8250-1238c52a6b95-kube-api-access-cs9x5" (OuterVolumeSpecName: "kube-api-access-cs9x5") pod "6c20b024-6fec-4f85-8250-1238c52a6b95" (UID: "6c20b024-6fec-4f85-8250-1238c52a6b95"). InnerVolumeSpecName "kube-api-access-cs9x5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.580298 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c20b024-6fec-4f85-8250-1238c52a6b95" (UID: "6c20b024-6fec-4f85-8250-1238c52a6b95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.592230 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-config-data" (OuterVolumeSpecName: "config-data") pod "6c20b024-6fec-4f85-8250-1238c52a6b95" (UID: "6c20b024-6fec-4f85-8250-1238c52a6b95"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.606592 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6c20b024-6fec-4f85-8250-1238c52a6b95" (UID: "6c20b024-6fec-4f85-8250-1238c52a6b95"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.619899 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6c20b024-6fec-4f85-8250-1238c52a6b95" (UID: "6c20b024-6fec-4f85-8250-1238c52a6b95"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.651603 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs9x5\" (UniqueName: \"kubernetes.io/projected/6c20b024-6fec-4f85-8250-1238c52a6b95-kube-api-access-cs9x5\") on node \"crc\" DevicePath \"\""
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.651653 4965 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.651666 4965 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.651681 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.651694 4965 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c20b024-6fec-4f85-8250-1238c52a6b95-logs\") on node \"crc\" DevicePath \"\""
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.651705 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c20b024-6fec-4f85-8250-1238c52a6b95-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.683248 4965 generic.go:334] "Generic (PLEG): container finished" podID="fd6d75f5-49d6-41d3-b812-e406dea5a4d1" containerID="06b8e45a13ff271edb63dc5141fa51d3cd108a2a4888b96e852541d8b583efcc" exitCode=143
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.683316 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd6d75f5-49d6-41d3-b812-e406dea5a4d1","Type":"ContainerDied","Data":"06b8e45a13ff271edb63dc5141fa51d3cd108a2a4888b96e852541d8b583efcc"}
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.684995 4965 generic.go:334] "Generic (PLEG): container finished" podID="6c20b024-6fec-4f85-8250-1238c52a6b95" containerID="c07acf87a868063127fde6f534eeea64fa997a8801dbc3d9e5191cb575d6041c" exitCode=0
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.685011 4965 generic.go:334] "Generic (PLEG): container finished" podID="6c20b024-6fec-4f85-8250-1238c52a6b95" containerID="047edf91c45c5b44f961f6ddc800f6bce583c1eb724fe25de58cdf079c27bb26" exitCode=143
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.685046 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6c20b024-6fec-4f85-8250-1238c52a6b95","Type":"ContainerDied","Data":"c07acf87a868063127fde6f534eeea64fa997a8801dbc3d9e5191cb575d6041c"}
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.685062 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6c20b024-6fec-4f85-8250-1238c52a6b95","Type":"ContainerDied","Data":"047edf91c45c5b44f961f6ddc800f6bce583c1eb724fe25de58cdf079c27bb26"}
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.685072 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6c20b024-6fec-4f85-8250-1238c52a6b95","Type":"ContainerDied","Data":"ec8f8d02b1e036ebd6be4c564e3c312a0d37fa758fef9200546360ae82dbf710"}
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.685087 4965 scope.go:117] "RemoveContainer" containerID="c07acf87a868063127fde6f534eeea64fa997a8801dbc3d9e5191cb575d6041c"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.685086 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.690646 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9","Type":"ContainerStarted","Data":"0a59b8bfb0fcf1bc1396d007c8c49f65f91d51d862f862b294efe759c8f9283f"}
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.690674 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9","Type":"ContainerStarted","Data":"fce956585ee697aed044ef6906290498634746e440e46993a660d7a5c155ed71"}
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.714817 4965 scope.go:117] "RemoveContainer" containerID="047edf91c45c5b44f961f6ddc800f6bce583c1eb724fe25de58cdf079c27bb26"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.748425 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.764935 4965 scope.go:117] "RemoveContainer" containerID="c07acf87a868063127fde6f534eeea64fa997a8801dbc3d9e5191cb575d6041c"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.767006 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 19 10:06:08 crc kubenswrapper[4965]: E0219 10:06:08.767575 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c07acf87a868063127fde6f534eeea64fa997a8801dbc3d9e5191cb575d6041c\": container with ID starting with c07acf87a868063127fde6f534eeea64fa997a8801dbc3d9e5191cb575d6041c not found: ID does not exist" containerID="c07acf87a868063127fde6f534eeea64fa997a8801dbc3d9e5191cb575d6041c"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.767623 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c07acf87a868063127fde6f534eeea64fa997a8801dbc3d9e5191cb575d6041c"} err="failed to get container status \"c07acf87a868063127fde6f534eeea64fa997a8801dbc3d9e5191cb575d6041c\": rpc error: code = NotFound desc = could not find container \"c07acf87a868063127fde6f534eeea64fa997a8801dbc3d9e5191cb575d6041c\": container with ID starting with c07acf87a868063127fde6f534eeea64fa997a8801dbc3d9e5191cb575d6041c not found: ID does not exist"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.767652 4965 scope.go:117] "RemoveContainer" containerID="047edf91c45c5b44f961f6ddc800f6bce583c1eb724fe25de58cdf079c27bb26"
Feb 19 10:06:08 crc kubenswrapper[4965]: E0219 10:06:08.769040 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"047edf91c45c5b44f961f6ddc800f6bce583c1eb724fe25de58cdf079c27bb26\": container with ID starting with 047edf91c45c5b44f961f6ddc800f6bce583c1eb724fe25de58cdf079c27bb26 not found: ID does not exist" containerID="047edf91c45c5b44f961f6ddc800f6bce583c1eb724fe25de58cdf079c27bb26"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.769090 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"047edf91c45c5b44f961f6ddc800f6bce583c1eb724fe25de58cdf079c27bb26"} err="failed to get container status \"047edf91c45c5b44f961f6ddc800f6bce583c1eb724fe25de58cdf079c27bb26\": rpc error: code = NotFound desc = could not find container \"047edf91c45c5b44f961f6ddc800f6bce583c1eb724fe25de58cdf079c27bb26\": container with ID starting with 047edf91c45c5b44f961f6ddc800f6bce583c1eb724fe25de58cdf079c27bb26 not found: ID does not exist"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.769118 4965 scope.go:117] "RemoveContainer" containerID="c07acf87a868063127fde6f534eeea64fa997a8801dbc3d9e5191cb575d6041c"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.769472 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c07acf87a868063127fde6f534eeea64fa997a8801dbc3d9e5191cb575d6041c"} err="failed to get container status \"c07acf87a868063127fde6f534eeea64fa997a8801dbc3d9e5191cb575d6041c\": rpc error: code = NotFound desc = could not find container \"c07acf87a868063127fde6f534eeea64fa997a8801dbc3d9e5191cb575d6041c\": container with ID starting with c07acf87a868063127fde6f534eeea64fa997a8801dbc3d9e5191cb575d6041c not found: ID does not exist"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.769506 4965 scope.go:117] "RemoveContainer" containerID="047edf91c45c5b44f961f6ddc800f6bce583c1eb724fe25de58cdf079c27bb26"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.769880 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"047edf91c45c5b44f961f6ddc800f6bce583c1eb724fe25de58cdf079c27bb26"} err="failed to get container status \"047edf91c45c5b44f961f6ddc800f6bce583c1eb724fe25de58cdf079c27bb26\": rpc error: code = NotFound desc = could not find container \"047edf91c45c5b44f961f6ddc800f6bce583c1eb724fe25de58cdf079c27bb26\": container with ID starting with 047edf91c45c5b44f961f6ddc800f6bce583c1eb724fe25de58cdf079c27bb26 not found: ID does not exist"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.803025 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 19 10:06:08 crc kubenswrapper[4965]: E0219 10:06:08.803907 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c20b024-6fec-4f85-8250-1238c52a6b95" containerName="nova-api-log"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.803926 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c20b024-6fec-4f85-8250-1238c52a6b95" containerName="nova-api-log"
Feb 19 10:06:08 crc kubenswrapper[4965]: E0219 10:06:08.803962 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b21ddc99-df08-4635-996d-872a7c3f6f3b" containerName="nova-manage"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.803996 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="b21ddc99-df08-4635-996d-872a7c3f6f3b" containerName="nova-manage"
Feb 19 10:06:08 crc kubenswrapper[4965]: E0219 10:06:08.804002 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c20b024-6fec-4f85-8250-1238c52a6b95" containerName="nova-api-api"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.804008 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c20b024-6fec-4f85-8250-1238c52a6b95" containerName="nova-api-api"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.804485 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c20b024-6fec-4f85-8250-1238c52a6b95" containerName="nova-api-api"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.804509 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c20b024-6fec-4f85-8250-1238c52a6b95" containerName="nova-api-log"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.804559 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="b21ddc99-df08-4635-996d-872a7c3f6f3b" containerName="nova-manage"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.806236 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.807266 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.808552 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.808574 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.808759 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.958305 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b71fda7-2162-4dda-a5ba-053eb96e59a9-logs\") pod \"nova-api-0\" (UID: \"4b71fda7-2162-4dda-a5ba-053eb96e59a9\") " pod="openstack/nova-api-0"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.958753 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb7nh\" (UniqueName: \"kubernetes.io/projected/4b71fda7-2162-4dda-a5ba-053eb96e59a9-kube-api-access-jb7nh\") pod \"nova-api-0\" (UID: \"4b71fda7-2162-4dda-a5ba-053eb96e59a9\") " pod="openstack/nova-api-0"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.958834 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b71fda7-2162-4dda-a5ba-053eb96e59a9-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b71fda7-2162-4dda-a5ba-053eb96e59a9\") " pod="openstack/nova-api-0"
Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.958939 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/4b71fda7-2162-4dda-a5ba-053eb96e59a9-config-data\") pod \"nova-api-0\" (UID: \"4b71fda7-2162-4dda-a5ba-053eb96e59a9\") " pod="openstack/nova-api-0" Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.959005 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b71fda7-2162-4dda-a5ba-053eb96e59a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b71fda7-2162-4dda-a5ba-053eb96e59a9\") " pod="openstack/nova-api-0" Feb 19 10:06:08 crc kubenswrapper[4965]: I0219 10:06:08.959037 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b71fda7-2162-4dda-a5ba-053eb96e59a9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b71fda7-2162-4dda-a5ba-053eb96e59a9\") " pod="openstack/nova-api-0" Feb 19 10:06:09 crc kubenswrapper[4965]: I0219 10:06:09.060705 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb7nh\" (UniqueName: \"kubernetes.io/projected/4b71fda7-2162-4dda-a5ba-053eb96e59a9-kube-api-access-jb7nh\") pod \"nova-api-0\" (UID: \"4b71fda7-2162-4dda-a5ba-053eb96e59a9\") " pod="openstack/nova-api-0" Feb 19 10:06:09 crc kubenswrapper[4965]: I0219 10:06:09.060766 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b71fda7-2162-4dda-a5ba-053eb96e59a9-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b71fda7-2162-4dda-a5ba-053eb96e59a9\") " pod="openstack/nova-api-0" Feb 19 10:06:09 crc kubenswrapper[4965]: I0219 10:06:09.060824 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b71fda7-2162-4dda-a5ba-053eb96e59a9-config-data\") pod \"nova-api-0\" (UID: \"4b71fda7-2162-4dda-a5ba-053eb96e59a9\") " pod="openstack/nova-api-0" Feb 19 
10:06:09 crc kubenswrapper[4965]: I0219 10:06:09.060854 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b71fda7-2162-4dda-a5ba-053eb96e59a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b71fda7-2162-4dda-a5ba-053eb96e59a9\") " pod="openstack/nova-api-0" Feb 19 10:06:09 crc kubenswrapper[4965]: I0219 10:06:09.060875 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b71fda7-2162-4dda-a5ba-053eb96e59a9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b71fda7-2162-4dda-a5ba-053eb96e59a9\") " pod="openstack/nova-api-0" Feb 19 10:06:09 crc kubenswrapper[4965]: I0219 10:06:09.060904 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b71fda7-2162-4dda-a5ba-053eb96e59a9-logs\") pod \"nova-api-0\" (UID: \"4b71fda7-2162-4dda-a5ba-053eb96e59a9\") " pod="openstack/nova-api-0" Feb 19 10:06:09 crc kubenswrapper[4965]: I0219 10:06:09.061354 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b71fda7-2162-4dda-a5ba-053eb96e59a9-logs\") pod \"nova-api-0\" (UID: \"4b71fda7-2162-4dda-a5ba-053eb96e59a9\") " pod="openstack/nova-api-0" Feb 19 10:06:09 crc kubenswrapper[4965]: I0219 10:06:09.066955 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b71fda7-2162-4dda-a5ba-053eb96e59a9-config-data\") pod \"nova-api-0\" (UID: \"4b71fda7-2162-4dda-a5ba-053eb96e59a9\") " pod="openstack/nova-api-0" Feb 19 10:06:09 crc kubenswrapper[4965]: I0219 10:06:09.066968 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b71fda7-2162-4dda-a5ba-053eb96e59a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"4b71fda7-2162-4dda-a5ba-053eb96e59a9\") " pod="openstack/nova-api-0" Feb 19 10:06:09 crc kubenswrapper[4965]: I0219 10:06:09.081789 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb7nh\" (UniqueName: \"kubernetes.io/projected/4b71fda7-2162-4dda-a5ba-053eb96e59a9-kube-api-access-jb7nh\") pod \"nova-api-0\" (UID: \"4b71fda7-2162-4dda-a5ba-053eb96e59a9\") " pod="openstack/nova-api-0" Feb 19 10:06:09 crc kubenswrapper[4965]: I0219 10:06:09.083101 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b71fda7-2162-4dda-a5ba-053eb96e59a9-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b71fda7-2162-4dda-a5ba-053eb96e59a9\") " pod="openstack/nova-api-0" Feb 19 10:06:09 crc kubenswrapper[4965]: I0219 10:06:09.083494 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b71fda7-2162-4dda-a5ba-053eb96e59a9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b71fda7-2162-4dda-a5ba-053eb96e59a9\") " pod="openstack/nova-api-0" Feb 19 10:06:09 crc kubenswrapper[4965]: I0219 10:06:09.210784 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c20b024-6fec-4f85-8250-1238c52a6b95" path="/var/lib/kubelet/pods/6c20b024-6fec-4f85-8250-1238c52a6b95/volumes" Feb 19 10:06:09 crc kubenswrapper[4965]: I0219 10:06:09.248481 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:06:09 crc kubenswrapper[4965]: I0219 10:06:09.669771 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dgqlg" Feb 19 10:06:09 crc kubenswrapper[4965]: I0219 10:06:09.670178 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dgqlg" Feb 19 10:06:09 crc kubenswrapper[4965]: I0219 10:06:09.716822 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9","Type":"ContainerStarted","Data":"f763afd4009aef571150e49eb6198f8953d32b0f01ea8df09fc11cd260b60575"} Feb 19 10:06:09 crc kubenswrapper[4965]: I0219 10:06:09.716863 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9","Type":"ContainerStarted","Data":"0c915eeac916d8f043b9c03c9230a5f9f582f7942b094d386243f48a2cc41f6d"} Feb 19 10:06:09 crc kubenswrapper[4965]: I0219 10:06:09.724287 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dgqlg" Feb 19 10:06:09 crc kubenswrapper[4965]: I0219 10:06:09.745263 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:06:09 crc kubenswrapper[4965]: I0219 10:06:09.790309 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dgqlg" Feb 19 10:06:09 crc kubenswrapper[4965]: I0219 10:06:09.965568 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dgqlg"] Feb 19 10:06:10 crc kubenswrapper[4965]: I0219 10:06:10.739576 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"4b71fda7-2162-4dda-a5ba-053eb96e59a9","Type":"ContainerStarted","Data":"144d2c2ce86455d16867fd1e283e8ef8559e01b129ae18964d9a5366105791e5"} Feb 19 10:06:10 crc kubenswrapper[4965]: I0219 10:06:10.739920 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b71fda7-2162-4dda-a5ba-053eb96e59a9","Type":"ContainerStarted","Data":"adb1b515c6c759ac5f9aa6b9534b2e6d4e2e64da3a25f53c6e9dea61cbb85084"} Feb 19 10:06:10 crc kubenswrapper[4965]: I0219 10:06:10.739936 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b71fda7-2162-4dda-a5ba-053eb96e59a9","Type":"ContainerStarted","Data":"15f3c4d71c14203edb2e3ed9dacfa10d865909d94373a13c1a7ed1954fc36cb4"} Feb 19 10:06:10 crc kubenswrapper[4965]: I0219 10:06:10.774492 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.774475127 podStartE2EDuration="2.774475127s" podCreationTimestamp="2026-02-19 10:06:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:06:10.76265732 +0000 UTC m=+1426.383978640" watchObservedRunningTime="2026-02-19 10:06:10.774475127 +0000 UTC m=+1426.395796437" Feb 19 10:06:11 crc kubenswrapper[4965]: E0219 10:06:11.064640 4965 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e44bda06a835128697318ffaa90768ab207e29cafd57d5c0492c8732a45e70a7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 10:06:11 crc kubenswrapper[4965]: E0219 10:06:11.068615 4965 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="e44bda06a835128697318ffaa90768ab207e29cafd57d5c0492c8732a45e70a7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 10:06:11 crc kubenswrapper[4965]: E0219 10:06:11.070332 4965 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e44bda06a835128697318ffaa90768ab207e29cafd57d5c0492c8732a45e70a7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 10:06:11 crc kubenswrapper[4965]: E0219 10:06:11.070382 4965 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="edd94405-a4e9-4078-b7f7-d0fe27e28d69" containerName="nova-scheduler-scheduler" Feb 19 10:06:11 crc kubenswrapper[4965]: I0219 10:06:11.084364 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="fd6d75f5-49d6-41d3-b812-e406dea5a4d1" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.222:8775/\": read tcp 10.217.0.2:57962->10.217.0.222:8775: read: connection reset by peer" Feb 19 10:06:11 crc kubenswrapper[4965]: I0219 10:06:11.084404 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="fd6d75f5-49d6-41d3-b812-e406dea5a4d1" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.222:8775/\": read tcp 10.217.0.2:57976->10.217.0.222:8775: read: connection reset by peer" Feb 19 10:06:11 crc kubenswrapper[4965]: I0219 10:06:11.774049 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9","Type":"ContainerStarted","Data":"b8a34ae6a66fb788288150631b15fe3ee7ca5e33aab2bee3bb7ec1698231b11c"} Feb 19 10:06:11 crc kubenswrapper[4965]: 
I0219 10:06:11.774811 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 10:06:11 crc kubenswrapper[4965]: I0219 10:06:11.776160 4965 generic.go:334] "Generic (PLEG): container finished" podID="fd6d75f5-49d6-41d3-b812-e406dea5a4d1" containerID="475d7b0f8a76a385b34955f80982c1f0792b7806b49576cbe9b801197953d60e" exitCode=0 Feb 19 10:06:11 crc kubenswrapper[4965]: I0219 10:06:11.776986 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd6d75f5-49d6-41d3-b812-e406dea5a4d1","Type":"ContainerDied","Data":"475d7b0f8a76a385b34955f80982c1f0792b7806b49576cbe9b801197953d60e"} Feb 19 10:06:11 crc kubenswrapper[4965]: I0219 10:06:11.777010 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd6d75f5-49d6-41d3-b812-e406dea5a4d1","Type":"ContainerDied","Data":"70b85840ec315f8dbc682c29904b04455ae5a9a48bb9663c18e8d6cc978a7ca7"} Feb 19 10:06:11 crc kubenswrapper[4965]: I0219 10:06:11.777025 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70b85840ec315f8dbc682c29904b04455ae5a9a48bb9663c18e8d6cc978a7ca7" Feb 19 10:06:11 crc kubenswrapper[4965]: I0219 10:06:11.777252 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dgqlg" podUID="0b81c5e4-6bba-49a8-8687-dcff16739800" containerName="registry-server" containerID="cri-o://3a1c87ed15711e87c59eac7ac5171dbaf79f27a23dfc23c77b7391db5101d5e5" gracePeriod=2 Feb 19 10:06:11 crc kubenswrapper[4965]: I0219 10:06:11.801869 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.298174837 podStartE2EDuration="5.801856261s" podCreationTimestamp="2026-02-19 10:06:06 +0000 UTC" firstStartedPulling="2026-02-19 10:06:07.663785905 +0000 UTC m=+1423.285107215" lastFinishedPulling="2026-02-19 10:06:11.167467329 +0000 UTC 
m=+1426.788788639" observedRunningTime="2026-02-19 10:06:11.794747449 +0000 UTC m=+1427.416068759" watchObservedRunningTime="2026-02-19 10:06:11.801856261 +0000 UTC m=+1427.423177571" Feb 19 10:06:11 crc kubenswrapper[4965]: I0219 10:06:11.888591 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.036975 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-logs\") pod \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\" (UID: \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\") " Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.037022 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-config-data\") pod \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\" (UID: \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\") " Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.037073 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-nova-metadata-tls-certs\") pod \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\" (UID: \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\") " Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.037098 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-combined-ca-bundle\") pod \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\" (UID: \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\") " Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.037144 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsj8\" (UniqueName: 
\"kubernetes.io/projected/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-kube-api-access-fqsj8\") pod \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\" (UID: \"fd6d75f5-49d6-41d3-b812-e406dea5a4d1\") " Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.037445 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-logs" (OuterVolumeSpecName: "logs") pod "fd6d75f5-49d6-41d3-b812-e406dea5a4d1" (UID: "fd6d75f5-49d6-41d3-b812-e406dea5a4d1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.037999 4965 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.042380 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-kube-api-access-fqsj8" (OuterVolumeSpecName: "kube-api-access-fqsj8") pod "fd6d75f5-49d6-41d3-b812-e406dea5a4d1" (UID: "fd6d75f5-49d6-41d3-b812-e406dea5a4d1"). InnerVolumeSpecName "kube-api-access-fqsj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.068390 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd6d75f5-49d6-41d3-b812-e406dea5a4d1" (UID: "fd6d75f5-49d6-41d3-b812-e406dea5a4d1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.077865 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-config-data" (OuterVolumeSpecName: "config-data") pod "fd6d75f5-49d6-41d3-b812-e406dea5a4d1" (UID: "fd6d75f5-49d6-41d3-b812-e406dea5a4d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.107376 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "fd6d75f5-49d6-41d3-b812-e406dea5a4d1" (UID: "fd6d75f5-49d6-41d3-b812-e406dea5a4d1"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.146787 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.146818 4965 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.146829 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.146837 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsj8\" (UniqueName: 
\"kubernetes.io/projected/fd6d75f5-49d6-41d3-b812-e406dea5a4d1-kube-api-access-fqsj8\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.445832 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dgqlg" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.557962 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngk6f\" (UniqueName: \"kubernetes.io/projected/0b81c5e4-6bba-49a8-8687-dcff16739800-kube-api-access-ngk6f\") pod \"0b81c5e4-6bba-49a8-8687-dcff16739800\" (UID: \"0b81c5e4-6bba-49a8-8687-dcff16739800\") " Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.558097 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b81c5e4-6bba-49a8-8687-dcff16739800-utilities\") pod \"0b81c5e4-6bba-49a8-8687-dcff16739800\" (UID: \"0b81c5e4-6bba-49a8-8687-dcff16739800\") " Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.558290 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b81c5e4-6bba-49a8-8687-dcff16739800-catalog-content\") pod \"0b81c5e4-6bba-49a8-8687-dcff16739800\" (UID: \"0b81c5e4-6bba-49a8-8687-dcff16739800\") " Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.560466 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b81c5e4-6bba-49a8-8687-dcff16739800-utilities" (OuterVolumeSpecName: "utilities") pod "0b81c5e4-6bba-49a8-8687-dcff16739800" (UID: "0b81c5e4-6bba-49a8-8687-dcff16739800"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.568401 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b81c5e4-6bba-49a8-8687-dcff16739800-kube-api-access-ngk6f" (OuterVolumeSpecName: "kube-api-access-ngk6f") pod "0b81c5e4-6bba-49a8-8687-dcff16739800" (UID: "0b81c5e4-6bba-49a8-8687-dcff16739800"). InnerVolumeSpecName "kube-api-access-ngk6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.627514 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b81c5e4-6bba-49a8-8687-dcff16739800-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b81c5e4-6bba-49a8-8687-dcff16739800" (UID: "0b81c5e4-6bba-49a8-8687-dcff16739800"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.660763 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b81c5e4-6bba-49a8-8687-dcff16739800-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.660797 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b81c5e4-6bba-49a8-8687-dcff16739800-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.660807 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngk6f\" (UniqueName: \"kubernetes.io/projected/0b81c5e4-6bba-49a8-8687-dcff16739800-kube-api-access-ngk6f\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.788271 4965 generic.go:334] "Generic (PLEG): container finished" podID="0b81c5e4-6bba-49a8-8687-dcff16739800" 
containerID="3a1c87ed15711e87c59eac7ac5171dbaf79f27a23dfc23c77b7391db5101d5e5" exitCode=0 Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.788347 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dgqlg" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.788346 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgqlg" event={"ID":"0b81c5e4-6bba-49a8-8687-dcff16739800","Type":"ContainerDied","Data":"3a1c87ed15711e87c59eac7ac5171dbaf79f27a23dfc23c77b7391db5101d5e5"} Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.788406 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.788448 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgqlg" event={"ID":"0b81c5e4-6bba-49a8-8687-dcff16739800","Type":"ContainerDied","Data":"87d893adf10a45da6e5e80ebbc3aa9017b64d9c834458f61a767fc755c997128"} Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.788473 4965 scope.go:117] "RemoveContainer" containerID="3a1c87ed15711e87c59eac7ac5171dbaf79f27a23dfc23c77b7391db5101d5e5" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.812468 4965 scope.go:117] "RemoveContainer" containerID="074fcc96c41a59b4ac3aba10ce4f4fa544af07529f7e544e778362ad5bf77c09" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.841641 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dgqlg"] Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.856559 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dgqlg"] Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.857253 4965 scope.go:117] "RemoveContainer" containerID="486ddbfedbe8fe310b840a1694c4e556a2012aac749642592c90d0d6aca1a316" Feb 19 
10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.887258 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.930122 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.944074 4965 scope.go:117] "RemoveContainer" containerID="3a1c87ed15711e87c59eac7ac5171dbaf79f27a23dfc23c77b7391db5101d5e5" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.945359 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:06:12 crc kubenswrapper[4965]: E0219 10:06:12.945946 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd6d75f5-49d6-41d3-b812-e406dea5a4d1" containerName="nova-metadata-log" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.945963 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6d75f5-49d6-41d3-b812-e406dea5a4d1" containerName="nova-metadata-log" Feb 19 10:06:12 crc kubenswrapper[4965]: E0219 10:06:12.945986 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b81c5e4-6bba-49a8-8687-dcff16739800" containerName="extract-content" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.945995 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b81c5e4-6bba-49a8-8687-dcff16739800" containerName="extract-content" Feb 19 10:06:12 crc kubenswrapper[4965]: E0219 10:06:12.946024 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b81c5e4-6bba-49a8-8687-dcff16739800" containerName="extract-utilities" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.946032 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b81c5e4-6bba-49a8-8687-dcff16739800" containerName="extract-utilities" Feb 19 10:06:12 crc kubenswrapper[4965]: E0219 10:06:12.946059 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd6d75f5-49d6-41d3-b812-e406dea5a4d1" 
containerName="nova-metadata-metadata" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.946067 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6d75f5-49d6-41d3-b812-e406dea5a4d1" containerName="nova-metadata-metadata" Feb 19 10:06:12 crc kubenswrapper[4965]: E0219 10:06:12.946078 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b81c5e4-6bba-49a8-8687-dcff16739800" containerName="registry-server" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.946085 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b81c5e4-6bba-49a8-8687-dcff16739800" containerName="registry-server" Feb 19 10:06:12 crc kubenswrapper[4965]: E0219 10:06:12.946824 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a1c87ed15711e87c59eac7ac5171dbaf79f27a23dfc23c77b7391db5101d5e5\": container with ID starting with 3a1c87ed15711e87c59eac7ac5171dbaf79f27a23dfc23c77b7391db5101d5e5 not found: ID does not exist" containerID="3a1c87ed15711e87c59eac7ac5171dbaf79f27a23dfc23c77b7391db5101d5e5" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.946863 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a1c87ed15711e87c59eac7ac5171dbaf79f27a23dfc23c77b7391db5101d5e5"} err="failed to get container status \"3a1c87ed15711e87c59eac7ac5171dbaf79f27a23dfc23c77b7391db5101d5e5\": rpc error: code = NotFound desc = could not find container \"3a1c87ed15711e87c59eac7ac5171dbaf79f27a23dfc23c77b7391db5101d5e5\": container with ID starting with 3a1c87ed15711e87c59eac7ac5171dbaf79f27a23dfc23c77b7391db5101d5e5 not found: ID does not exist" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.946895 4965 scope.go:117] "RemoveContainer" containerID="074fcc96c41a59b4ac3aba10ce4f4fa544af07529f7e544e778362ad5bf77c09" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.947336 4965 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fd6d75f5-49d6-41d3-b812-e406dea5a4d1" containerName="nova-metadata-metadata" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.947464 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd6d75f5-49d6-41d3-b812-e406dea5a4d1" containerName="nova-metadata-log" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.947580 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b81c5e4-6bba-49a8-8687-dcff16739800" containerName="registry-server" Feb 19 10:06:12 crc kubenswrapper[4965]: E0219 10:06:12.947392 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"074fcc96c41a59b4ac3aba10ce4f4fa544af07529f7e544e778362ad5bf77c09\": container with ID starting with 074fcc96c41a59b4ac3aba10ce4f4fa544af07529f7e544e778362ad5bf77c09 not found: ID does not exist" containerID="074fcc96c41a59b4ac3aba10ce4f4fa544af07529f7e544e778362ad5bf77c09" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.947996 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"074fcc96c41a59b4ac3aba10ce4f4fa544af07529f7e544e778362ad5bf77c09"} err="failed to get container status \"074fcc96c41a59b4ac3aba10ce4f4fa544af07529f7e544e778362ad5bf77c09\": rpc error: code = NotFound desc = could not find container \"074fcc96c41a59b4ac3aba10ce4f4fa544af07529f7e544e778362ad5bf77c09\": container with ID starting with 074fcc96c41a59b4ac3aba10ce4f4fa544af07529f7e544e778362ad5bf77c09 not found: ID does not exist" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.948025 4965 scope.go:117] "RemoveContainer" containerID="486ddbfedbe8fe310b840a1694c4e556a2012aac749642592c90d0d6aca1a316" Feb 19 10:06:12 crc kubenswrapper[4965]: E0219 10:06:12.950587 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"486ddbfedbe8fe310b840a1694c4e556a2012aac749642592c90d0d6aca1a316\": container 
with ID starting with 486ddbfedbe8fe310b840a1694c4e556a2012aac749642592c90d0d6aca1a316 not found: ID does not exist" containerID="486ddbfedbe8fe310b840a1694c4e556a2012aac749642592c90d0d6aca1a316" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.950924 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"486ddbfedbe8fe310b840a1694c4e556a2012aac749642592c90d0d6aca1a316"} err="failed to get container status \"486ddbfedbe8fe310b840a1694c4e556a2012aac749642592c90d0d6aca1a316\": rpc error: code = NotFound desc = could not find container \"486ddbfedbe8fe310b840a1694c4e556a2012aac749642592c90d0d6aca1a316\": container with ID starting with 486ddbfedbe8fe310b840a1694c4e556a2012aac749642592c90d0d6aca1a316 not found: ID does not exist" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.952060 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.957822 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.958422 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 10:06:12 crc kubenswrapper[4965]: I0219 10:06:12.958485 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.073377 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29094ed-8036-44ed-a882-7ad1d5ad4cc3-config-data\") pod \"nova-metadata-0\" (UID: \"b29094ed-8036-44ed-a882-7ad1d5ad4cc3\") " pod="openstack/nova-metadata-0" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.073550 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zpcfg\" (UniqueName: \"kubernetes.io/projected/b29094ed-8036-44ed-a882-7ad1d5ad4cc3-kube-api-access-zpcfg\") pod \"nova-metadata-0\" (UID: \"b29094ed-8036-44ed-a882-7ad1d5ad4cc3\") " pod="openstack/nova-metadata-0" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.073620 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b29094ed-8036-44ed-a882-7ad1d5ad4cc3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b29094ed-8036-44ed-a882-7ad1d5ad4cc3\") " pod="openstack/nova-metadata-0" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.073685 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b29094ed-8036-44ed-a882-7ad1d5ad4cc3-logs\") pod \"nova-metadata-0\" (UID: \"b29094ed-8036-44ed-a882-7ad1d5ad4cc3\") " pod="openstack/nova-metadata-0" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.073718 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29094ed-8036-44ed-a882-7ad1d5ad4cc3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b29094ed-8036-44ed-a882-7ad1d5ad4cc3\") " pod="openstack/nova-metadata-0" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.175818 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29094ed-8036-44ed-a882-7ad1d5ad4cc3-config-data\") pod \"nova-metadata-0\" (UID: \"b29094ed-8036-44ed-a882-7ad1d5ad4cc3\") " pod="openstack/nova-metadata-0" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.175950 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpcfg\" (UniqueName: 
\"kubernetes.io/projected/b29094ed-8036-44ed-a882-7ad1d5ad4cc3-kube-api-access-zpcfg\") pod \"nova-metadata-0\" (UID: \"b29094ed-8036-44ed-a882-7ad1d5ad4cc3\") " pod="openstack/nova-metadata-0" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.176001 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b29094ed-8036-44ed-a882-7ad1d5ad4cc3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b29094ed-8036-44ed-a882-7ad1d5ad4cc3\") " pod="openstack/nova-metadata-0" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.176044 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b29094ed-8036-44ed-a882-7ad1d5ad4cc3-logs\") pod \"nova-metadata-0\" (UID: \"b29094ed-8036-44ed-a882-7ad1d5ad4cc3\") " pod="openstack/nova-metadata-0" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.176064 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29094ed-8036-44ed-a882-7ad1d5ad4cc3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b29094ed-8036-44ed-a882-7ad1d5ad4cc3\") " pod="openstack/nova-metadata-0" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.176837 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b29094ed-8036-44ed-a882-7ad1d5ad4cc3-logs\") pod \"nova-metadata-0\" (UID: \"b29094ed-8036-44ed-a882-7ad1d5ad4cc3\") " pod="openstack/nova-metadata-0" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.182720 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29094ed-8036-44ed-a882-7ad1d5ad4cc3-config-data\") pod \"nova-metadata-0\" (UID: \"b29094ed-8036-44ed-a882-7ad1d5ad4cc3\") " pod="openstack/nova-metadata-0" Feb 19 10:06:13 crc 
kubenswrapper[4965]: I0219 10:06:13.184447 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29094ed-8036-44ed-a882-7ad1d5ad4cc3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b29094ed-8036-44ed-a882-7ad1d5ad4cc3\") " pod="openstack/nova-metadata-0" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.185679 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b29094ed-8036-44ed-a882-7ad1d5ad4cc3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b29094ed-8036-44ed-a882-7ad1d5ad4cc3\") " pod="openstack/nova-metadata-0" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.193653 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpcfg\" (UniqueName: \"kubernetes.io/projected/b29094ed-8036-44ed-a882-7ad1d5ad4cc3-kube-api-access-zpcfg\") pod \"nova-metadata-0\" (UID: \"b29094ed-8036-44ed-a882-7ad1d5ad4cc3\") " pod="openstack/nova-metadata-0" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.210291 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b81c5e4-6bba-49a8-8687-dcff16739800" path="/var/lib/kubelet/pods/0b81c5e4-6bba-49a8-8687-dcff16739800/volumes" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.211112 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd6d75f5-49d6-41d3-b812-e406dea5a4d1" path="/var/lib/kubelet/pods/fd6d75f5-49d6-41d3-b812-e406dea5a4d1/volumes" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.332366 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.501080 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.584284 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd94405-a4e9-4078-b7f7-d0fe27e28d69-config-data\") pod \"edd94405-a4e9-4078-b7f7-d0fe27e28d69\" (UID: \"edd94405-a4e9-4078-b7f7-d0fe27e28d69\") " Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.584357 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg5b4\" (UniqueName: \"kubernetes.io/projected/edd94405-a4e9-4078-b7f7-d0fe27e28d69-kube-api-access-zg5b4\") pod \"edd94405-a4e9-4078-b7f7-d0fe27e28d69\" (UID: \"edd94405-a4e9-4078-b7f7-d0fe27e28d69\") " Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.584621 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd94405-a4e9-4078-b7f7-d0fe27e28d69-combined-ca-bundle\") pod \"edd94405-a4e9-4078-b7f7-d0fe27e28d69\" (UID: \"edd94405-a4e9-4078-b7f7-d0fe27e28d69\") " Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.602182 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edd94405-a4e9-4078-b7f7-d0fe27e28d69-kube-api-access-zg5b4" (OuterVolumeSpecName: "kube-api-access-zg5b4") pod "edd94405-a4e9-4078-b7f7-d0fe27e28d69" (UID: "edd94405-a4e9-4078-b7f7-d0fe27e28d69"). InnerVolumeSpecName "kube-api-access-zg5b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.631476 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd94405-a4e9-4078-b7f7-d0fe27e28d69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edd94405-a4e9-4078-b7f7-d0fe27e28d69" (UID: "edd94405-a4e9-4078-b7f7-d0fe27e28d69"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.642391 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd94405-a4e9-4078-b7f7-d0fe27e28d69-config-data" (OuterVolumeSpecName: "config-data") pod "edd94405-a4e9-4078-b7f7-d0fe27e28d69" (UID: "edd94405-a4e9-4078-b7f7-d0fe27e28d69"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.687566 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edd94405-a4e9-4078-b7f7-d0fe27e28d69-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.687608 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg5b4\" (UniqueName: \"kubernetes.io/projected/edd94405-a4e9-4078-b7f7-d0fe27e28d69-kube-api-access-zg5b4\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.687623 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edd94405-a4e9-4078-b7f7-d0fe27e28d69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.800205 4965 generic.go:334] "Generic (PLEG): container finished" podID="edd94405-a4e9-4078-b7f7-d0fe27e28d69" containerID="e44bda06a835128697318ffaa90768ab207e29cafd57d5c0492c8732a45e70a7" exitCode=0 Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.800250 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.800255 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"edd94405-a4e9-4078-b7f7-d0fe27e28d69","Type":"ContainerDied","Data":"e44bda06a835128697318ffaa90768ab207e29cafd57d5c0492c8732a45e70a7"} Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.800359 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"edd94405-a4e9-4078-b7f7-d0fe27e28d69","Type":"ContainerDied","Data":"a3e3a728c0049958984849ebb8d60a71986860c7637dcbf547179f303ac3e246"} Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.800377 4965 scope.go:117] "RemoveContainer" containerID="e44bda06a835128697318ffaa90768ab207e29cafd57d5c0492c8732a45e70a7" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.840964 4965 scope.go:117] "RemoveContainer" containerID="e44bda06a835128697318ffaa90768ab207e29cafd57d5c0492c8732a45e70a7" Feb 19 10:06:13 crc kubenswrapper[4965]: E0219 10:06:13.846647 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e44bda06a835128697318ffaa90768ab207e29cafd57d5c0492c8732a45e70a7\": container with ID starting with e44bda06a835128697318ffaa90768ab207e29cafd57d5c0492c8732a45e70a7 not found: ID does not exist" containerID="e44bda06a835128697318ffaa90768ab207e29cafd57d5c0492c8732a45e70a7" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.846738 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e44bda06a835128697318ffaa90768ab207e29cafd57d5c0492c8732a45e70a7"} err="failed to get container status \"e44bda06a835128697318ffaa90768ab207e29cafd57d5c0492c8732a45e70a7\": rpc error: code = NotFound desc = could not find container \"e44bda06a835128697318ffaa90768ab207e29cafd57d5c0492c8732a45e70a7\": container with ID starting with 
e44bda06a835128697318ffaa90768ab207e29cafd57d5c0492c8732a45e70a7 not found: ID does not exist" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.847018 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.877666 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.887409 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.909111 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:06:13 crc kubenswrapper[4965]: E0219 10:06:13.909688 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd94405-a4e9-4078-b7f7-d0fe27e28d69" containerName="nova-scheduler-scheduler" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.909711 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd94405-a4e9-4078-b7f7-d0fe27e28d69" containerName="nova-scheduler-scheduler" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.909903 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="edd94405-a4e9-4078-b7f7-d0fe27e28d69" containerName="nova-scheduler-scheduler" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.910733 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.913163 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.921084 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.994358 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bc77d18-18ba-4f28-ab8c-a1d4e77996f3-config-data\") pod \"nova-scheduler-0\" (UID: \"6bc77d18-18ba-4f28-ab8c-a1d4e77996f3\") " pod="openstack/nova-scheduler-0" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.994426 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bc77d18-18ba-4f28-ab8c-a1d4e77996f3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6bc77d18-18ba-4f28-ab8c-a1d4e77996f3\") " pod="openstack/nova-scheduler-0" Feb 19 10:06:13 crc kubenswrapper[4965]: I0219 10:06:13.994458 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pnms\" (UniqueName: \"kubernetes.io/projected/6bc77d18-18ba-4f28-ab8c-a1d4e77996f3-kube-api-access-4pnms\") pod \"nova-scheduler-0\" (UID: \"6bc77d18-18ba-4f28-ab8c-a1d4e77996f3\") " pod="openstack/nova-scheduler-0" Feb 19 10:06:14 crc kubenswrapper[4965]: I0219 10:06:14.096864 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bc77d18-18ba-4f28-ab8c-a1d4e77996f3-config-data\") pod \"nova-scheduler-0\" (UID: \"6bc77d18-18ba-4f28-ab8c-a1d4e77996f3\") " pod="openstack/nova-scheduler-0" Feb 19 10:06:14 crc kubenswrapper[4965]: I0219 10:06:14.096929 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bc77d18-18ba-4f28-ab8c-a1d4e77996f3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6bc77d18-18ba-4f28-ab8c-a1d4e77996f3\") " pod="openstack/nova-scheduler-0" Feb 19 10:06:14 crc kubenswrapper[4965]: I0219 10:06:14.096962 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pnms\" (UniqueName: \"kubernetes.io/projected/6bc77d18-18ba-4f28-ab8c-a1d4e77996f3-kube-api-access-4pnms\") pod \"nova-scheduler-0\" (UID: \"6bc77d18-18ba-4f28-ab8c-a1d4e77996f3\") " pod="openstack/nova-scheduler-0" Feb 19 10:06:14 crc kubenswrapper[4965]: I0219 10:06:14.100258 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bc77d18-18ba-4f28-ab8c-a1d4e77996f3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6bc77d18-18ba-4f28-ab8c-a1d4e77996f3\") " pod="openstack/nova-scheduler-0" Feb 19 10:06:14 crc kubenswrapper[4965]: I0219 10:06:14.100503 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bc77d18-18ba-4f28-ab8c-a1d4e77996f3-config-data\") pod \"nova-scheduler-0\" (UID: \"6bc77d18-18ba-4f28-ab8c-a1d4e77996f3\") " pod="openstack/nova-scheduler-0" Feb 19 10:06:14 crc kubenswrapper[4965]: I0219 10:06:14.116384 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pnms\" (UniqueName: \"kubernetes.io/projected/6bc77d18-18ba-4f28-ab8c-a1d4e77996f3-kube-api-access-4pnms\") pod \"nova-scheduler-0\" (UID: \"6bc77d18-18ba-4f28-ab8c-a1d4e77996f3\") " pod="openstack/nova-scheduler-0" Feb 19 10:06:14 crc kubenswrapper[4965]: I0219 10:06:14.253243 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:06:14 crc kubenswrapper[4965]: I0219 10:06:14.700665 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:06:14 crc kubenswrapper[4965]: I0219 10:06:14.821159 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6bc77d18-18ba-4f28-ab8c-a1d4e77996f3","Type":"ContainerStarted","Data":"60e467de81139122c4237b02a6c8abd24b24f0e14ba55e06b2e8a326eaaaf4f9"} Feb 19 10:06:14 crc kubenswrapper[4965]: I0219 10:06:14.825523 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b29094ed-8036-44ed-a882-7ad1d5ad4cc3","Type":"ContainerStarted","Data":"5e012c62b955584d0a82f0fa8e0dc2f8987f44a6deea5a2df9b7711d62138fef"} Feb 19 10:06:14 crc kubenswrapper[4965]: I0219 10:06:14.825578 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b29094ed-8036-44ed-a882-7ad1d5ad4cc3","Type":"ContainerStarted","Data":"f308b5d1d04101eba3561fe41ba70187b6b7fa72bdec736be52dc27797530342"} Feb 19 10:06:14 crc kubenswrapper[4965]: I0219 10:06:14.825595 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b29094ed-8036-44ed-a882-7ad1d5ad4cc3","Type":"ContainerStarted","Data":"6b4e52d6bf29b96c2038b16e413e77a2eafa66c0fb86702351f37140fcb510b7"} Feb 19 10:06:14 crc kubenswrapper[4965]: I0219 10:06:14.846473 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.846453861 podStartE2EDuration="2.846453861s" podCreationTimestamp="2026-02-19 10:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:06:14.844125034 +0000 UTC m=+1430.465446394" watchObservedRunningTime="2026-02-19 10:06:14.846453861 +0000 UTC m=+1430.467775171" Feb 19 10:06:15 crc 
kubenswrapper[4965]: I0219 10:06:15.221571 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edd94405-a4e9-4078-b7f7-d0fe27e28d69" path="/var/lib/kubelet/pods/edd94405-a4e9-4078-b7f7-d0fe27e28d69/volumes" Feb 19 10:06:15 crc kubenswrapper[4965]: I0219 10:06:15.856499 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6bc77d18-18ba-4f28-ab8c-a1d4e77996f3","Type":"ContainerStarted","Data":"46f6230b3c88b7d82b4b1f5e4c39272d5d159738d367e015e51f0c2fa0f9e2a4"} Feb 19 10:06:15 crc kubenswrapper[4965]: I0219 10:06:15.901451 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.901410455 podStartE2EDuration="2.901410455s" podCreationTimestamp="2026-02-19 10:06:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:06:15.882010553 +0000 UTC m=+1431.503331873" watchObservedRunningTime="2026-02-19 10:06:15.901410455 +0000 UTC m=+1431.522731825" Feb 19 10:06:18 crc kubenswrapper[4965]: I0219 10:06:18.333296 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 10:06:18 crc kubenswrapper[4965]: I0219 10:06:18.333721 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 10:06:19 crc kubenswrapper[4965]: I0219 10:06:19.248932 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 10:06:19 crc kubenswrapper[4965]: I0219 10:06:19.249368 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 10:06:19 crc kubenswrapper[4965]: I0219 10:06:19.254020 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 10:06:20 crc kubenswrapper[4965]: I0219 10:06:20.298593 4965 prober.go:107] 
"Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b71fda7-2162-4dda-a5ba-053eb96e59a9" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.233:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 10:06:20 crc kubenswrapper[4965]: I0219 10:06:20.298601 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b71fda7-2162-4dda-a5ba-053eb96e59a9" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.233:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 10:06:23 crc kubenswrapper[4965]: I0219 10:06:23.333606 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 10:06:23 crc kubenswrapper[4965]: I0219 10:06:23.333898 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 10:06:24 crc kubenswrapper[4965]: I0219 10:06:24.253619 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 10:06:24 crc kubenswrapper[4965]: I0219 10:06:24.283755 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 10:06:24 crc kubenswrapper[4965]: I0219 10:06:24.348345 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b29094ed-8036-44ed-a882-7ad1d5ad4cc3" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.234:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 10:06:24 crc kubenswrapper[4965]: I0219 10:06:24.348612 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b29094ed-8036-44ed-a882-7ad1d5ad4cc3" containerName="nova-metadata-metadata" probeResult="failure" output="Get 
\"https://10.217.0.234:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:06:24 crc kubenswrapper[4965]: I0219 10:06:24.975272 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 10:06:29 crc kubenswrapper[4965]: I0219 10:06:29.256171 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 10:06:29 crc kubenswrapper[4965]: I0219 10:06:29.257851 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 10:06:29 crc kubenswrapper[4965]: I0219 10:06:29.261593 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 10:06:29 crc kubenswrapper[4965]: I0219 10:06:29.262722 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 10:06:29 crc kubenswrapper[4965]: I0219 10:06:29.993572 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 10:06:29 crc kubenswrapper[4965]: I0219 10:06:29.999874 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 10:06:33 crc kubenswrapper[4965]: I0219 10:06:33.339334 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 10:06:33 crc kubenswrapper[4965]: I0219 10:06:33.340277 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 10:06:33 crc kubenswrapper[4965]: I0219 10:06:33.347129 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 10:06:33 crc kubenswrapper[4965]: I0219 10:06:33.347835 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 10:06:37 crc kubenswrapper[4965]: I0219 10:06:37.169934 4965 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 10:06:48 crc kubenswrapper[4965]: I0219 10:06:48.344348 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-wh9q9"] Feb 19 10:06:48 crc kubenswrapper[4965]: I0219 10:06:48.357974 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-wh9q9"] Feb 19 10:06:48 crc kubenswrapper[4965]: I0219 10:06:48.412401 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-nk4c7"] Feb 19 10:06:48 crc kubenswrapper[4965]: I0219 10:06:48.415547 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-nk4c7" Feb 19 10:06:48 crc kubenswrapper[4965]: I0219 10:06:48.418333 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 10:06:48 crc kubenswrapper[4965]: I0219 10:06:48.423380 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-nk4c7"] Feb 19 10:06:48 crc kubenswrapper[4965]: I0219 10:06:48.571514 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-certs\") pod \"cloudkitty-db-sync-nk4c7\" (UID: \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\") " pod="openstack/cloudkitty-db-sync-nk4c7" Feb 19 10:06:48 crc kubenswrapper[4965]: I0219 10:06:48.572096 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-scripts\") pod \"cloudkitty-db-sync-nk4c7\" (UID: \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\") " pod="openstack/cloudkitty-db-sync-nk4c7" Feb 19 10:06:48 crc kubenswrapper[4965]: I0219 10:06:48.572273 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-config-data\") pod \"cloudkitty-db-sync-nk4c7\" (UID: \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\") " pod="openstack/cloudkitty-db-sync-nk4c7" Feb 19 10:06:48 crc kubenswrapper[4965]: I0219 10:06:48.572450 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrjfc\" (UniqueName: \"kubernetes.io/projected/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-kube-api-access-jrjfc\") pod \"cloudkitty-db-sync-nk4c7\" (UID: \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\") " pod="openstack/cloudkitty-db-sync-nk4c7" Feb 19 10:06:48 crc kubenswrapper[4965]: I0219 10:06:48.572583 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-combined-ca-bundle\") pod \"cloudkitty-db-sync-nk4c7\" (UID: \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\") " pod="openstack/cloudkitty-db-sync-nk4c7" Feb 19 10:06:48 crc kubenswrapper[4965]: I0219 10:06:48.674906 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-scripts\") pod \"cloudkitty-db-sync-nk4c7\" (UID: \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\") " pod="openstack/cloudkitty-db-sync-nk4c7" Feb 19 10:06:48 crc kubenswrapper[4965]: I0219 10:06:48.675006 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-config-data\") pod \"cloudkitty-db-sync-nk4c7\" (UID: \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\") " pod="openstack/cloudkitty-db-sync-nk4c7" Feb 19 10:06:48 crc kubenswrapper[4965]: I0219 10:06:48.675119 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrjfc\" (UniqueName: 
\"kubernetes.io/projected/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-kube-api-access-jrjfc\") pod \"cloudkitty-db-sync-nk4c7\" (UID: \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\") " pod="openstack/cloudkitty-db-sync-nk4c7" Feb 19 10:06:48 crc kubenswrapper[4965]: I0219 10:06:48.675175 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-combined-ca-bundle\") pod \"cloudkitty-db-sync-nk4c7\" (UID: \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\") " pod="openstack/cloudkitty-db-sync-nk4c7" Feb 19 10:06:48 crc kubenswrapper[4965]: I0219 10:06:48.675276 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-certs\") pod \"cloudkitty-db-sync-nk4c7\" (UID: \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\") " pod="openstack/cloudkitty-db-sync-nk4c7" Feb 19 10:06:48 crc kubenswrapper[4965]: I0219 10:06:48.681817 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-combined-ca-bundle\") pod \"cloudkitty-db-sync-nk4c7\" (UID: \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\") " pod="openstack/cloudkitty-db-sync-nk4c7" Feb 19 10:06:48 crc kubenswrapper[4965]: I0219 10:06:48.682295 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-certs\") pod \"cloudkitty-db-sync-nk4c7\" (UID: \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\") " pod="openstack/cloudkitty-db-sync-nk4c7" Feb 19 10:06:48 crc kubenswrapper[4965]: I0219 10:06:48.682680 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-scripts\") pod \"cloudkitty-db-sync-nk4c7\" (UID: 
\"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\") " pod="openstack/cloudkitty-db-sync-nk4c7" Feb 19 10:06:48 crc kubenswrapper[4965]: I0219 10:06:48.686288 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-config-data\") pod \"cloudkitty-db-sync-nk4c7\" (UID: \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\") " pod="openstack/cloudkitty-db-sync-nk4c7" Feb 19 10:06:48 crc kubenswrapper[4965]: I0219 10:06:48.695401 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrjfc\" (UniqueName: \"kubernetes.io/projected/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-kube-api-access-jrjfc\") pod \"cloudkitty-db-sync-nk4c7\" (UID: \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\") " pod="openstack/cloudkitty-db-sync-nk4c7" Feb 19 10:06:48 crc kubenswrapper[4965]: I0219 10:06:48.749353 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-nk4c7" Feb 19 10:06:49 crc kubenswrapper[4965]: I0219 10:06:49.219035 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4e3779f-9f25-4334-97f9-a3778bd78d5e" path="/var/lib/kubelet/pods/e4e3779f-9f25-4334-97f9-a3778bd78d5e/volumes" Feb 19 10:06:49 crc kubenswrapper[4965]: I0219 10:06:49.255411 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-nk4c7"] Feb 19 10:06:50 crc kubenswrapper[4965]: I0219 10:06:50.216250 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-nk4c7" event={"ID":"cc70a082-b592-4eb9-80f9-481a1bd9fe0c","Type":"ContainerStarted","Data":"5c703a6bcc904ca4ef8f7c4cf2370e88d7291a0744a11e85b47b9d8529927454"} Feb 19 10:06:50 crc kubenswrapper[4965]: I0219 10:06:50.216693 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-nk4c7" 
event={"ID":"cc70a082-b592-4eb9-80f9-481a1bd9fe0c","Type":"ContainerStarted","Data":"9ce9ae533ae0f2c7fde33954f87572d9d03e14c8533a94a94744f5ace45f4eb8"} Feb 19 10:06:50 crc kubenswrapper[4965]: I0219 10:06:50.477922 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:06:50 crc kubenswrapper[4965]: I0219 10:06:50.478294 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" containerName="ceilometer-central-agent" containerID="cri-o://0a59b8bfb0fcf1bc1396d007c8c49f65f91d51d862f862b294efe759c8f9283f" gracePeriod=30 Feb 19 10:06:50 crc kubenswrapper[4965]: I0219 10:06:50.478363 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" containerName="proxy-httpd" containerID="cri-o://b8a34ae6a66fb788288150631b15fe3ee7ca5e33aab2bee3bb7ec1698231b11c" gracePeriod=30 Feb 19 10:06:50 crc kubenswrapper[4965]: I0219 10:06:50.478398 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" containerName="sg-core" containerID="cri-o://f763afd4009aef571150e49eb6198f8953d32b0f01ea8df09fc11cd260b60575" gracePeriod=30 Feb 19 10:06:50 crc kubenswrapper[4965]: I0219 10:06:50.478469 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" containerName="ceilometer-notification-agent" containerID="cri-o://0c915eeac916d8f043b9c03c9230a5f9f582f7942b094d386243f48a2cc41f6d" gracePeriod=30 Feb 19 10:06:50 crc kubenswrapper[4965]: I0219 10:06:50.507108 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-nk4c7" podStartSLOduration=2.309952296 podStartE2EDuration="2.507068472s" podCreationTimestamp="2026-02-19 10:06:48 +0000 UTC" 
firstStartedPulling="2026-02-19 10:06:49.255213468 +0000 UTC m=+1464.876534778" lastFinishedPulling="2026-02-19 10:06:49.452329644 +0000 UTC m=+1465.073650954" observedRunningTime="2026-02-19 10:06:50.486639127 +0000 UTC m=+1466.107960437" watchObservedRunningTime="2026-02-19 10:06:50.507068472 +0000 UTC m=+1466.128389782" Feb 19 10:06:51 crc kubenswrapper[4965]: I0219 10:06:51.238309 4965 generic.go:334] "Generic (PLEG): container finished" podID="e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" containerID="b8a34ae6a66fb788288150631b15fe3ee7ca5e33aab2bee3bb7ec1698231b11c" exitCode=0 Feb 19 10:06:51 crc kubenswrapper[4965]: I0219 10:06:51.238624 4965 generic.go:334] "Generic (PLEG): container finished" podID="e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" containerID="f763afd4009aef571150e49eb6198f8953d32b0f01ea8df09fc11cd260b60575" exitCode=2 Feb 19 10:06:51 crc kubenswrapper[4965]: I0219 10:06:51.238632 4965 generic.go:334] "Generic (PLEG): container finished" podID="e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" containerID="0a59b8bfb0fcf1bc1396d007c8c49f65f91d51d862f862b294efe759c8f9283f" exitCode=0 Feb 19 10:06:51 crc kubenswrapper[4965]: I0219 10:06:51.238398 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9","Type":"ContainerDied","Data":"b8a34ae6a66fb788288150631b15fe3ee7ca5e33aab2bee3bb7ec1698231b11c"} Feb 19 10:06:51 crc kubenswrapper[4965]: I0219 10:06:51.238722 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9","Type":"ContainerDied","Data":"f763afd4009aef571150e49eb6198f8953d32b0f01ea8df09fc11cd260b60575"} Feb 19 10:06:51 crc kubenswrapper[4965]: I0219 10:06:51.238736 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9","Type":"ContainerDied","Data":"0a59b8bfb0fcf1bc1396d007c8c49f65f91d51d862f862b294efe759c8f9283f"} Feb 19 10:06:52 
crc kubenswrapper[4965]: I0219 10:06:52.145976 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:06:52 crc kubenswrapper[4965]: I0219 10:06:52.261279 4965 generic.go:334] "Generic (PLEG): container finished" podID="e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" containerID="0c915eeac916d8f043b9c03c9230a5f9f582f7942b094d386243f48a2cc41f6d" exitCode=0 Feb 19 10:06:52 crc kubenswrapper[4965]: I0219 10:06:52.261318 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9","Type":"ContainerDied","Data":"0c915eeac916d8f043b9c03c9230a5f9f582f7942b094d386243f48a2cc41f6d"} Feb 19 10:06:52 crc kubenswrapper[4965]: I0219 10:06:52.276589 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:06:52 crc kubenswrapper[4965]: I0219 10:06:52.858460 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:06:52 crc kubenswrapper[4965]: I0219 10:06:52.979730 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-combined-ca-bundle\") pod \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " Feb 19 10:06:52 crc kubenswrapper[4965]: I0219 10:06:52.979879 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-run-httpd\") pod \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " Feb 19 10:06:52 crc kubenswrapper[4965]: I0219 10:06:52.979937 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-scripts\") pod 
\"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " Feb 19 10:06:52 crc kubenswrapper[4965]: I0219 10:06:52.979961 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-config-data\") pod \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " Feb 19 10:06:52 crc kubenswrapper[4965]: I0219 10:06:52.980180 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-log-httpd\") pod \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " Feb 19 10:06:52 crc kubenswrapper[4965]: I0219 10:06:52.980274 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hprq8\" (UniqueName: \"kubernetes.io/projected/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-kube-api-access-hprq8\") pod \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " Feb 19 10:06:52 crc kubenswrapper[4965]: I0219 10:06:52.980347 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-sg-core-conf-yaml\") pod \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " Feb 19 10:06:52 crc kubenswrapper[4965]: I0219 10:06:52.980529 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-ceilometer-tls-certs\") pod \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\" (UID: \"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9\") " Feb 19 10:06:52 crc kubenswrapper[4965]: I0219 10:06:52.992519 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" (UID: "e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.006993 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" (UID: "e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.022132 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-kube-api-access-hprq8" (OuterVolumeSpecName: "kube-api-access-hprq8") pod "e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" (UID: "e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9"). InnerVolumeSpecName "kube-api-access-hprq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.030963 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-scripts" (OuterVolumeSpecName: "scripts") pod "e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" (UID: "e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.095169 4965 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.095274 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hprq8\" (UniqueName: \"kubernetes.io/projected/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-kube-api-access-hprq8\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.095286 4965 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.095296 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.124360 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" (UID: "e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.200773 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-config-data" (OuterVolumeSpecName: "config-data") pod "e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" (UID: "e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.206731 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" (UID: "e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.209490 4965 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.209838 4965 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.210249 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.213353 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" (UID: "e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.274472 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9","Type":"ContainerDied","Data":"fce956585ee697aed044ef6906290498634746e440e46993a660d7a5c155ed71"} Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.274532 4965 scope.go:117] "RemoveContainer" containerID="b8a34ae6a66fb788288150631b15fe3ee7ca5e33aab2bee3bb7ec1698231b11c" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.274729 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.310685 4965 scope.go:117] "RemoveContainer" containerID="f763afd4009aef571150e49eb6198f8953d32b0f01ea8df09fc11cd260b60575" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.312640 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.313585 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.355010 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.357346 4965 scope.go:117] "RemoveContainer" containerID="0c915eeac916d8f043b9c03c9230a5f9f582f7942b094d386243f48a2cc41f6d" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.383978 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:06:53 crc kubenswrapper[4965]: E0219 10:06:53.389377 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" containerName="sg-core" Feb 19 10:06:53 crc 
kubenswrapper[4965]: I0219 10:06:53.389432 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" containerName="sg-core" Feb 19 10:06:53 crc kubenswrapper[4965]: E0219 10:06:53.389442 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" containerName="ceilometer-notification-agent" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.389448 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" containerName="ceilometer-notification-agent" Feb 19 10:06:53 crc kubenswrapper[4965]: E0219 10:06:53.389461 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" containerName="proxy-httpd" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.389467 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" containerName="proxy-httpd" Feb 19 10:06:53 crc kubenswrapper[4965]: E0219 10:06:53.389484 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" containerName="ceilometer-central-agent" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.389506 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" containerName="ceilometer-central-agent" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.389747 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" containerName="sg-core" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.389764 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" containerName="proxy-httpd" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.389771 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" containerName="ceilometer-central-agent" 
Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.389788 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" containerName="ceilometer-notification-agent" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.392161 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.399055 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.399248 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.399372 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.411406 4965 scope.go:117] "RemoveContainer" containerID="0a59b8bfb0fcf1bc1396d007c8c49f65f91d51d862f862b294efe759c8f9283f" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.449266 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.519363 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a094c76-7174-4b58-8b32-12020982c63b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.519441 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a094c76-7174-4b58-8b32-12020982c63b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc 
kubenswrapper[4965]: I0219 10:06:53.519468 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grnj5\" (UniqueName: \"kubernetes.io/projected/2a094c76-7174-4b58-8b32-12020982c63b-kube-api-access-grnj5\") pod \"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.519527 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a094c76-7174-4b58-8b32-12020982c63b-config-data\") pod \"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.519563 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a094c76-7174-4b58-8b32-12020982c63b-scripts\") pod \"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.519610 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a094c76-7174-4b58-8b32-12020982c63b-run-httpd\") pod \"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.519658 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a094c76-7174-4b58-8b32-12020982c63b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.519681 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a094c76-7174-4b58-8b32-12020982c63b-log-httpd\") pod \"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.621023 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a094c76-7174-4b58-8b32-12020982c63b-config-data\") pod \"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.621097 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a094c76-7174-4b58-8b32-12020982c63b-scripts\") pod \"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.621149 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a094c76-7174-4b58-8b32-12020982c63b-run-httpd\") pod \"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.621210 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a094c76-7174-4b58-8b32-12020982c63b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.621235 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a094c76-7174-4b58-8b32-12020982c63b-log-httpd\") pod \"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: 
I0219 10:06:53.621262 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a094c76-7174-4b58-8b32-12020982c63b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.621300 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a094c76-7174-4b58-8b32-12020982c63b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.621329 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grnj5\" (UniqueName: \"kubernetes.io/projected/2a094c76-7174-4b58-8b32-12020982c63b-kube-api-access-grnj5\") pod \"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.621987 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a094c76-7174-4b58-8b32-12020982c63b-log-httpd\") pod \"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.622065 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a094c76-7174-4b58-8b32-12020982c63b-run-httpd\") pod \"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.625824 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a094c76-7174-4b58-8b32-12020982c63b-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.626949 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a094c76-7174-4b58-8b32-12020982c63b-scripts\") pod \"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.627817 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a094c76-7174-4b58-8b32-12020982c63b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.629919 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a094c76-7174-4b58-8b32-12020982c63b-config-data\") pod \"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.631102 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a094c76-7174-4b58-8b32-12020982c63b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.654055 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grnj5\" (UniqueName: \"kubernetes.io/projected/2a094c76-7174-4b58-8b32-12020982c63b-kube-api-access-grnj5\") pod \"ceilometer-0\" (UID: \"2a094c76-7174-4b58-8b32-12020982c63b\") " pod="openstack/ceilometer-0" Feb 19 10:06:53 crc kubenswrapper[4965]: I0219 10:06:53.719764 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:06:54 crc kubenswrapper[4965]: I0219 10:06:54.295068 4965 generic.go:334] "Generic (PLEG): container finished" podID="cc70a082-b592-4eb9-80f9-481a1bd9fe0c" containerID="5c703a6bcc904ca4ef8f7c4cf2370e88d7291a0744a11e85b47b9d8529927454" exitCode=0 Feb 19 10:06:54 crc kubenswrapper[4965]: I0219 10:06:54.295127 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-nk4c7" event={"ID":"cc70a082-b592-4eb9-80f9-481a1bd9fe0c","Type":"ContainerDied","Data":"5c703a6bcc904ca4ef8f7c4cf2370e88d7291a0744a11e85b47b9d8529927454"} Feb 19 10:06:54 crc kubenswrapper[4965]: I0219 10:06:54.325660 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:06:55 crc kubenswrapper[4965]: I0219 10:06:55.214657 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9" path="/var/lib/kubelet/pods/e9e3bb27-a2a4-4c55-a340-d75dcb0e4df9/volumes" Feb 19 10:06:55 crc kubenswrapper[4965]: I0219 10:06:55.315860 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a094c76-7174-4b58-8b32-12020982c63b","Type":"ContainerStarted","Data":"d1784585ceb01cdfe9718e60d8a00bd6a4704a24c2d4a4404e0830d3173676c0"} Feb 19 10:06:55 crc kubenswrapper[4965]: I0219 10:06:55.870028 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-nk4c7" Feb 19 10:06:55 crc kubenswrapper[4965]: I0219 10:06:55.976911 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-certs\") pod \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\" (UID: \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\") " Feb 19 10:06:55 crc kubenswrapper[4965]: I0219 10:06:55.977072 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-scripts\") pod \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\" (UID: \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\") " Feb 19 10:06:55 crc kubenswrapper[4965]: I0219 10:06:55.977129 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-combined-ca-bundle\") pod \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\" (UID: \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\") " Feb 19 10:06:55 crc kubenswrapper[4965]: I0219 10:06:55.977445 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrjfc\" (UniqueName: \"kubernetes.io/projected/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-kube-api-access-jrjfc\") pod \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\" (UID: \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\") " Feb 19 10:06:55 crc kubenswrapper[4965]: I0219 10:06:55.977591 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-config-data\") pod \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\" (UID: \"cc70a082-b592-4eb9-80f9-481a1bd9fe0c\") " Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.003468 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-kube-api-access-jrjfc" (OuterVolumeSpecName: "kube-api-access-jrjfc") pod "cc70a082-b592-4eb9-80f9-481a1bd9fe0c" (UID: "cc70a082-b592-4eb9-80f9-481a1bd9fe0c"). InnerVolumeSpecName "kube-api-access-jrjfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.003606 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-scripts" (OuterVolumeSpecName: "scripts") pod "cc70a082-b592-4eb9-80f9-481a1bd9fe0c" (UID: "cc70a082-b592-4eb9-80f9-481a1bd9fe0c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.008081 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-certs" (OuterVolumeSpecName: "certs") pod "cc70a082-b592-4eb9-80f9-481a1bd9fe0c" (UID: "cc70a082-b592-4eb9-80f9-481a1bd9fe0c"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.048349 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-config-data" (OuterVolumeSpecName: "config-data") pod "cc70a082-b592-4eb9-80f9-481a1bd9fe0c" (UID: "cc70a082-b592-4eb9-80f9-481a1bd9fe0c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.063839 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc70a082-b592-4eb9-80f9-481a1bd9fe0c" (UID: "cc70a082-b592-4eb9-80f9-481a1bd9fe0c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.082174 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrjfc\" (UniqueName: \"kubernetes.io/projected/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-kube-api-access-jrjfc\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.082334 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.082430 4965 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.082508 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.082560 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc70a082-b592-4eb9-80f9-481a1bd9fe0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.327455 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-nk4c7" event={"ID":"cc70a082-b592-4eb9-80f9-481a1bd9fe0c","Type":"ContainerDied","Data":"9ce9ae533ae0f2c7fde33954f87572d9d03e14c8533a94a94744f5ace45f4eb8"} Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.327493 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ce9ae533ae0f2c7fde33954f87572d9d03e14c8533a94a94744f5ace45f4eb8" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.328446 4965 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-nk4c7" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.427550 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-vtxxb"] Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.438835 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-vtxxb"] Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.554679 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-7mdwh"] Feb 19 10:06:56 crc kubenswrapper[4965]: E0219 10:06:56.555083 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc70a082-b592-4eb9-80f9-481a1bd9fe0c" containerName="cloudkitty-db-sync" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.555104 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc70a082-b592-4eb9-80f9-481a1bd9fe0c" containerName="cloudkitty-db-sync" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.555348 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc70a082-b592-4eb9-80f9-481a1bd9fe0c" containerName="cloudkitty-db-sync" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.556064 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-7mdwh" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.559594 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.568421 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-7mdwh"] Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.592442 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4c84601-c0b3-46eb-8323-08b550442026-scripts\") pod \"cloudkitty-storageinit-7mdwh\" (UID: \"e4c84601-c0b3-46eb-8323-08b550442026\") " pod="openstack/cloudkitty-storageinit-7mdwh" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.592529 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4c84601-c0b3-46eb-8323-08b550442026-combined-ca-bundle\") pod \"cloudkitty-storageinit-7mdwh\" (UID: \"e4c84601-c0b3-46eb-8323-08b550442026\") " pod="openstack/cloudkitty-storageinit-7mdwh" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.592564 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4c84601-c0b3-46eb-8323-08b550442026-config-data\") pod \"cloudkitty-storageinit-7mdwh\" (UID: \"e4c84601-c0b3-46eb-8323-08b550442026\") " pod="openstack/cloudkitty-storageinit-7mdwh" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.592633 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6xpf\" (UniqueName: \"kubernetes.io/projected/e4c84601-c0b3-46eb-8323-08b550442026-kube-api-access-r6xpf\") pod \"cloudkitty-storageinit-7mdwh\" (UID: \"e4c84601-c0b3-46eb-8323-08b550442026\") " 
pod="openstack/cloudkitty-storageinit-7mdwh" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.592670 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e4c84601-c0b3-46eb-8323-08b550442026-certs\") pod \"cloudkitty-storageinit-7mdwh\" (UID: \"e4c84601-c0b3-46eb-8323-08b550442026\") " pod="openstack/cloudkitty-storageinit-7mdwh" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.694391 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6xpf\" (UniqueName: \"kubernetes.io/projected/e4c84601-c0b3-46eb-8323-08b550442026-kube-api-access-r6xpf\") pod \"cloudkitty-storageinit-7mdwh\" (UID: \"e4c84601-c0b3-46eb-8323-08b550442026\") " pod="openstack/cloudkitty-storageinit-7mdwh" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.694450 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e4c84601-c0b3-46eb-8323-08b550442026-certs\") pod \"cloudkitty-storageinit-7mdwh\" (UID: \"e4c84601-c0b3-46eb-8323-08b550442026\") " pod="openstack/cloudkitty-storageinit-7mdwh" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.694560 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4c84601-c0b3-46eb-8323-08b550442026-scripts\") pod \"cloudkitty-storageinit-7mdwh\" (UID: \"e4c84601-c0b3-46eb-8323-08b550442026\") " pod="openstack/cloudkitty-storageinit-7mdwh" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.694603 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4c84601-c0b3-46eb-8323-08b550442026-combined-ca-bundle\") pod \"cloudkitty-storageinit-7mdwh\" (UID: \"e4c84601-c0b3-46eb-8323-08b550442026\") " pod="openstack/cloudkitty-storageinit-7mdwh" Feb 19 10:06:56 crc 
kubenswrapper[4965]: I0219 10:06:56.694624 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4c84601-c0b3-46eb-8323-08b550442026-config-data\") pod \"cloudkitty-storageinit-7mdwh\" (UID: \"e4c84601-c0b3-46eb-8323-08b550442026\") " pod="openstack/cloudkitty-storageinit-7mdwh" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.698847 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e4c84601-c0b3-46eb-8323-08b550442026-certs\") pod \"cloudkitty-storageinit-7mdwh\" (UID: \"e4c84601-c0b3-46eb-8323-08b550442026\") " pod="openstack/cloudkitty-storageinit-7mdwh" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.702893 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4c84601-c0b3-46eb-8323-08b550442026-combined-ca-bundle\") pod \"cloudkitty-storageinit-7mdwh\" (UID: \"e4c84601-c0b3-46eb-8323-08b550442026\") " pod="openstack/cloudkitty-storageinit-7mdwh" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.703034 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4c84601-c0b3-46eb-8323-08b550442026-config-data\") pod \"cloudkitty-storageinit-7mdwh\" (UID: \"e4c84601-c0b3-46eb-8323-08b550442026\") " pod="openstack/cloudkitty-storageinit-7mdwh" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.706556 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4c84601-c0b3-46eb-8323-08b550442026-scripts\") pod \"cloudkitty-storageinit-7mdwh\" (UID: \"e4c84601-c0b3-46eb-8323-08b550442026\") " pod="openstack/cloudkitty-storageinit-7mdwh" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.711426 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-r6xpf\" (UniqueName: \"kubernetes.io/projected/e4c84601-c0b3-46eb-8323-08b550442026-kube-api-access-r6xpf\") pod \"cloudkitty-storageinit-7mdwh\" (UID: \"e4c84601-c0b3-46eb-8323-08b550442026\") " pod="openstack/cloudkitty-storageinit-7mdwh" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.871250 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-7mdwh" Feb 19 10:06:56 crc kubenswrapper[4965]: I0219 10:06:56.975778 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="bbd64606-53f8-484e-b8d2-c0fef4acb1bd" containerName="rabbitmq" containerID="cri-o://8fcc1af5d793f894765ce202de58987a948a9b64eb12e1b6d30caabf8608dd9d" gracePeriod=604796 Feb 19 10:06:57 crc kubenswrapper[4965]: I0219 10:06:57.233986 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="712a9147-94b7-45f8-97a9-3c0a988f748d" path="/var/lib/kubelet/pods/712a9147-94b7-45f8-97a9-3c0a988f748d/volumes" Feb 19 10:06:57 crc kubenswrapper[4965]: I0219 10:06:57.265767 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="305a32d6-c9f8-4494-b356-75d6c54c7467" containerName="rabbitmq" containerID="cri-o://b1bf88bec9e961815551bc486611bc2a7542f58e859808003d94fcd9f5c7cb21" gracePeriod=604795 Feb 19 10:06:59 crc kubenswrapper[4965]: I0219 10:06:59.493736 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-7mdwh"] Feb 19 10:06:59 crc kubenswrapper[4965]: W0219 10:06:59.498348 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4c84601_c0b3_46eb_8323_08b550442026.slice/crio-bc6a50fe510afc268f1aedc25492fb966cfb7e5505e5fb55486219c1a1e5de46 WatchSource:0}: Error finding container bc6a50fe510afc268f1aedc25492fb966cfb7e5505e5fb55486219c1a1e5de46: Status 404 returned 
error can't find the container with id bc6a50fe510afc268f1aedc25492fb966cfb7e5505e5fb55486219c1a1e5de46 Feb 19 10:07:00 crc kubenswrapper[4965]: I0219 10:07:00.369456 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a094c76-7174-4b58-8b32-12020982c63b","Type":"ContainerStarted","Data":"a602d6d47bd27b0353736186a02a094c9b1e14ff435a69768cbc98d60f4d08fc"} Feb 19 10:07:00 crc kubenswrapper[4965]: I0219 10:07:00.369762 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a094c76-7174-4b58-8b32-12020982c63b","Type":"ContainerStarted","Data":"a3d70d4da191ace2768d5dff43bbf9ad163f40c602833bd37ac54e23b3c98589"} Feb 19 10:07:00 crc kubenswrapper[4965]: I0219 10:07:00.371124 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-7mdwh" event={"ID":"e4c84601-c0b3-46eb-8323-08b550442026","Type":"ContainerStarted","Data":"0e4ccd393278b74e5792acea06d6e0a4617cf02ba8e34034eed444c716de21e0"} Feb 19 10:07:00 crc kubenswrapper[4965]: I0219 10:07:00.371180 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-7mdwh" event={"ID":"e4c84601-c0b3-46eb-8323-08b550442026","Type":"ContainerStarted","Data":"bc6a50fe510afc268f1aedc25492fb966cfb7e5505e5fb55486219c1a1e5de46"} Feb 19 10:07:00 crc kubenswrapper[4965]: I0219 10:07:00.388099 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-7mdwh" podStartSLOduration=4.388079864 podStartE2EDuration="4.388079864s" podCreationTimestamp="2026-02-19 10:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:07:00.385936682 +0000 UTC m=+1476.007257992" watchObservedRunningTime="2026-02-19 10:07:00.388079864 +0000 UTC m=+1476.009401174" Feb 19 10:07:01 crc kubenswrapper[4965]: I0219 10:07:01.418719 4965 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a094c76-7174-4b58-8b32-12020982c63b","Type":"ContainerStarted","Data":"703f6452920e8fb376a513fc4c3e57c01b2917a5f38b643c9f92e5f38f3edf2d"} Feb 19 10:07:01 crc kubenswrapper[4965]: I0219 10:07:01.422989 4965 generic.go:334] "Generic (PLEG): container finished" podID="e4c84601-c0b3-46eb-8323-08b550442026" containerID="0e4ccd393278b74e5792acea06d6e0a4617cf02ba8e34034eed444c716de21e0" exitCode=0 Feb 19 10:07:01 crc kubenswrapper[4965]: I0219 10:07:01.423073 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-7mdwh" event={"ID":"e4c84601-c0b3-46eb-8323-08b550442026","Type":"ContainerDied","Data":"0e4ccd393278b74e5792acea06d6e0a4617cf02ba8e34034eed444c716de21e0"} Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.242013 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-7mdwh" Feb 19 10:07:03 crc kubenswrapper[4965]: E0219 10:07:03.332829 4965 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbd64606_53f8_484e_b8d2_c0fef4acb1bd.slice/crio-conmon-8fcc1af5d793f894765ce202de58987a948a9b64eb12e1b6d30caabf8608dd9d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbd64606_53f8_484e_b8d2_c0fef4acb1bd.slice/crio-8fcc1af5d793f894765ce202de58987a948a9b64eb12e1b6d30caabf8608dd9d.scope\": RecentStats: unable to find data in memory cache]" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.396087 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6xpf\" (UniqueName: \"kubernetes.io/projected/e4c84601-c0b3-46eb-8323-08b550442026-kube-api-access-r6xpf\") pod \"e4c84601-c0b3-46eb-8323-08b550442026\" (UID: \"e4c84601-c0b3-46eb-8323-08b550442026\") " Feb 19 
10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.396611 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4c84601-c0b3-46eb-8323-08b550442026-scripts\") pod \"e4c84601-c0b3-46eb-8323-08b550442026\" (UID: \"e4c84601-c0b3-46eb-8323-08b550442026\") " Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.396689 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4c84601-c0b3-46eb-8323-08b550442026-combined-ca-bundle\") pod \"e4c84601-c0b3-46eb-8323-08b550442026\" (UID: \"e4c84601-c0b3-46eb-8323-08b550442026\") " Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.396744 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4c84601-c0b3-46eb-8323-08b550442026-config-data\") pod \"e4c84601-c0b3-46eb-8323-08b550442026\" (UID: \"e4c84601-c0b3-46eb-8323-08b550442026\") " Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.396789 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e4c84601-c0b3-46eb-8323-08b550442026-certs\") pod \"e4c84601-c0b3-46eb-8323-08b550442026\" (UID: \"e4c84601-c0b3-46eb-8323-08b550442026\") " Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.401616 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4c84601-c0b3-46eb-8323-08b550442026-certs" (OuterVolumeSpecName: "certs") pod "e4c84601-c0b3-46eb-8323-08b550442026" (UID: "e4c84601-c0b3-46eb-8323-08b550442026"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.402085 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4c84601-c0b3-46eb-8323-08b550442026-kube-api-access-r6xpf" (OuterVolumeSpecName: "kube-api-access-r6xpf") pod "e4c84601-c0b3-46eb-8323-08b550442026" (UID: "e4c84601-c0b3-46eb-8323-08b550442026"). InnerVolumeSpecName "kube-api-access-r6xpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.403720 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4c84601-c0b3-46eb-8323-08b550442026-scripts" (OuterVolumeSpecName: "scripts") pod "e4c84601-c0b3-46eb-8323-08b550442026" (UID: "e4c84601-c0b3-46eb-8323-08b550442026"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.433229 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="305a32d6-c9f8-4494-b356-75d6c54c7467" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.440004 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4c84601-c0b3-46eb-8323-08b550442026-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4c84601-c0b3-46eb-8323-08b550442026" (UID: "e4c84601-c0b3-46eb-8323-08b550442026"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.446675 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-7mdwh" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.446702 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-7mdwh" event={"ID":"e4c84601-c0b3-46eb-8323-08b550442026","Type":"ContainerDied","Data":"bc6a50fe510afc268f1aedc25492fb966cfb7e5505e5fb55486219c1a1e5de46"} Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.446779 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc6a50fe510afc268f1aedc25492fb966cfb7e5505e5fb55486219c1a1e5de46" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.455364 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a094c76-7174-4b58-8b32-12020982c63b","Type":"ContainerStarted","Data":"e11b4006f3a25036ca0e66493c912b3fe71f1f411e07dcf614c4524e26151d6f"} Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.455641 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.460372 4965 generic.go:334] "Generic (PLEG): container finished" podID="bbd64606-53f8-484e-b8d2-c0fef4acb1bd" containerID="8fcc1af5d793f894765ce202de58987a948a9b64eb12e1b6d30caabf8608dd9d" exitCode=0 Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.460410 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bbd64606-53f8-484e-b8d2-c0fef4acb1bd","Type":"ContainerDied","Data":"8fcc1af5d793f894765ce202de58987a948a9b64eb12e1b6d30caabf8608dd9d"} Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.465326 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4c84601-c0b3-46eb-8323-08b550442026-config-data" (OuterVolumeSpecName: "config-data") pod "e4c84601-c0b3-46eb-8323-08b550442026" (UID: "e4c84601-c0b3-46eb-8323-08b550442026"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.485062 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8552788420000002 podStartE2EDuration="10.485038825s" podCreationTimestamp="2026-02-19 10:06:53 +0000 UTC" firstStartedPulling="2026-02-19 10:06:54.334697783 +0000 UTC m=+1469.956019093" lastFinishedPulling="2026-02-19 10:07:02.964457766 +0000 UTC m=+1478.585779076" observedRunningTime="2026-02-19 10:07:03.48280233 +0000 UTC m=+1479.104123650" watchObservedRunningTime="2026-02-19 10:07:03.485038825 +0000 UTC m=+1479.106360135" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.500061 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4c84601-c0b3-46eb-8323-08b550442026-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.500091 4965 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e4c84601-c0b3-46eb-8323-08b550442026-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.500102 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6xpf\" (UniqueName: \"kubernetes.io/projected/e4c84601-c0b3-46eb-8323-08b550442026-kube-api-access-r6xpf\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.500112 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4c84601-c0b3-46eb-8323-08b550442026-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.500121 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4c84601-c0b3-46eb-8323-08b550442026-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 
10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.551903 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.601641 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-config-data\") pod \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.601720 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-erlang-cookie\") pod \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.601775 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-server-conf\") pod \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.601807 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-tls\") pod \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.601861 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-plugins\") pod \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 
10:07:03.601896 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-confd\") pod \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.601924 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-pod-info\") pod \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.601963 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-erlang-cookie-secret\") pod \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.602046 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-plugins-conf\") pod \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.603748 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "bbd64606-53f8-484e-b8d2-c0fef4acb1bd" (UID: "bbd64606-53f8-484e-b8d2-c0fef4acb1bd"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.604242 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "bbd64606-53f8-484e-b8d2-c0fef4acb1bd" (UID: "bbd64606-53f8-484e-b8d2-c0fef4acb1bd"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.606587 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "bbd64606-53f8-484e-b8d2-c0fef4acb1bd" (UID: "bbd64606-53f8-484e-b8d2-c0fef4acb1bd"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.611647 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "bbd64606-53f8-484e-b8d2-c0fef4acb1bd" (UID: "bbd64606-53f8-484e-b8d2-c0fef4acb1bd"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.613442 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-pod-info" (OuterVolumeSpecName: "pod-info") pod "bbd64606-53f8-484e-b8d2-c0fef4acb1bd" (UID: "bbd64606-53f8-484e-b8d2-c0fef4acb1bd"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.615545 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554\") pod \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.615622 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzvj4\" (UniqueName: \"kubernetes.io/projected/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-kube-api-access-fzvj4\") pod \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\" (UID: \"bbd64606-53f8-484e-b8d2-c0fef4acb1bd\") " Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.618107 4965 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.618137 4965 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.618153 4965 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.618164 4965 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.618175 4965 reconciler_common.go:293] "Volume detached for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.621138 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "bbd64606-53f8-484e-b8d2-c0fef4acb1bd" (UID: "bbd64606-53f8-484e-b8d2-c0fef4acb1bd"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.621992 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-kube-api-access-fzvj4" (OuterVolumeSpecName: "kube-api-access-fzvj4") pod "bbd64606-53f8-484e-b8d2-c0fef4acb1bd" (UID: "bbd64606-53f8-484e-b8d2-c0fef4acb1bd"). InnerVolumeSpecName "kube-api-access-fzvj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.648831 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-config-data" (OuterVolumeSpecName: "config-data") pod "bbd64606-53f8-484e-b8d2-c0fef4acb1bd" (UID: "bbd64606-53f8-484e-b8d2-c0fef4acb1bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.652351 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554" (OuterVolumeSpecName: "persistence") pod "bbd64606-53f8-484e-b8d2-c0fef4acb1bd" (UID: "bbd64606-53f8-484e-b8d2-c0fef4acb1bd"). InnerVolumeSpecName "pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.723222 4965 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.723292 4965 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554\") on node \"crc\" " Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.723323 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzvj4\" (UniqueName: \"kubernetes.io/projected/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-kube-api-access-fzvj4\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.723335 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.724880 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-server-conf" (OuterVolumeSpecName: "server-conf") pod "bbd64606-53f8-484e-b8d2-c0fef4acb1bd" (UID: "bbd64606-53f8-484e-b8d2-c0fef4acb1bd"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.825047 4965 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.954087 4965 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.954252 4965 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554") on node "crc" Feb 19 10:07:03 crc kubenswrapper[4965]: I0219 10:07:03.954260 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "bbd64606-53f8-484e-b8d2-c0fef4acb1bd" (UID: "bbd64606-53f8-484e-b8d2-c0fef4acb1bd"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.040522 4965 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bbd64606-53f8-484e-b8d2-c0fef4acb1bd-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.040554 4965 reconciler_common.go:293] "Volume detached for volume \"pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.065630 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.244135 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/305a32d6-c9f8-4494-b356-75d6c54c7467-config-data\") pod \"305a32d6-c9f8-4494-b356-75d6c54c7467\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.244228 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/305a32d6-c9f8-4494-b356-75d6c54c7467-plugins-conf\") pod \"305a32d6-c9f8-4494-b356-75d6c54c7467\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.244284 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-plugins\") pod \"305a32d6-c9f8-4494-b356-75d6c54c7467\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.244330 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/305a32d6-c9f8-4494-b356-75d6c54c7467-server-conf\") pod \"305a32d6-c9f8-4494-b356-75d6c54c7467\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.244355 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/305a32d6-c9f8-4494-b356-75d6c54c7467-pod-info\") pod \"305a32d6-c9f8-4494-b356-75d6c54c7467\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.244374 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-erlang-cookie\") pod \"305a32d6-c9f8-4494-b356-75d6c54c7467\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.244505 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvmrg\" (UniqueName: \"kubernetes.io/projected/305a32d6-c9f8-4494-b356-75d6c54c7467-kube-api-access-qvmrg\") pod \"305a32d6-c9f8-4494-b356-75d6c54c7467\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.244584 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/305a32d6-c9f8-4494-b356-75d6c54c7467-erlang-cookie-secret\") pod \"305a32d6-c9f8-4494-b356-75d6c54c7467\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.245390 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "305a32d6-c9f8-4494-b356-75d6c54c7467" (UID: "305a32d6-c9f8-4494-b356-75d6c54c7467"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.245869 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "305a32d6-c9f8-4494-b356-75d6c54c7467" (UID: "305a32d6-c9f8-4494-b356-75d6c54c7467"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.246333 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c\") pod \"305a32d6-c9f8-4494-b356-75d6c54c7467\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.246383 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-confd\") pod \"305a32d6-c9f8-4494-b356-75d6c54c7467\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.246416 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-tls\") pod \"305a32d6-c9f8-4494-b356-75d6c54c7467\" (UID: \"305a32d6-c9f8-4494-b356-75d6c54c7467\") " Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.247046 4965 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.247064 4965 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.249138 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/305a32d6-c9f8-4494-b356-75d6c54c7467-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "305a32d6-c9f8-4494-b356-75d6c54c7467" (UID: 
"305a32d6-c9f8-4494-b356-75d6c54c7467"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.256413 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/305a32d6-c9f8-4494-b356-75d6c54c7467-kube-api-access-qvmrg" (OuterVolumeSpecName: "kube-api-access-qvmrg") pod "305a32d6-c9f8-4494-b356-75d6c54c7467" (UID: "305a32d6-c9f8-4494-b356-75d6c54c7467"). InnerVolumeSpecName "kube-api-access-qvmrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.256507 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305a32d6-c9f8-4494-b356-75d6c54c7467-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "305a32d6-c9f8-4494-b356-75d6c54c7467" (UID: "305a32d6-c9f8-4494-b356-75d6c54c7467"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.256978 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/305a32d6-c9f8-4494-b356-75d6c54c7467-pod-info" (OuterVolumeSpecName: "pod-info") pod "305a32d6-c9f8-4494-b356-75d6c54c7467" (UID: "305a32d6-c9f8-4494-b356-75d6c54c7467"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.281488 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "305a32d6-c9f8-4494-b356-75d6c54c7467" (UID: "305a32d6-c9f8-4494-b356-75d6c54c7467"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.288570 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/305a32d6-c9f8-4494-b356-75d6c54c7467-config-data" (OuterVolumeSpecName: "config-data") pod "305a32d6-c9f8-4494-b356-75d6c54c7467" (UID: "305a32d6-c9f8-4494-b356-75d6c54c7467"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.309569 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c" (OuterVolumeSpecName: "persistence") pod "305a32d6-c9f8-4494-b356-75d6c54c7467" (UID: "305a32d6-c9f8-4494-b356-75d6c54c7467"). InnerVolumeSpecName "pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.335605 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/305a32d6-c9f8-4494-b356-75d6c54c7467-server-conf" (OuterVolumeSpecName: "server-conf") pod "305a32d6-c9f8-4494-b356-75d6c54c7467" (UID: "305a32d6-c9f8-4494-b356-75d6c54c7467"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.349186 4965 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/305a32d6-c9f8-4494-b356-75d6c54c7467-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.349281 4965 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c\") on node \"crc\" " Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.349303 4965 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.349316 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/305a32d6-c9f8-4494-b356-75d6c54c7467-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.349327 4965 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/305a32d6-c9f8-4494-b356-75d6c54c7467-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.349338 4965 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/305a32d6-c9f8-4494-b356-75d6c54c7467-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.349349 4965 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/305a32d6-c9f8-4494-b356-75d6c54c7467-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:04 crc 
kubenswrapper[4965]: I0219 10:07:04.349361 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvmrg\" (UniqueName: \"kubernetes.io/projected/305a32d6-c9f8-4494-b356-75d6c54c7467-kube-api-access-qvmrg\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.379333 4965 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.379480 4965 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c") on node "crc" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.447969 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "305a32d6-c9f8-4494-b356-75d6c54c7467" (UID: "305a32d6-c9f8-4494-b356-75d6c54c7467"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.448050 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.448300 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="81c49478-306d-44e9-99bd-157057f0ed27" containerName="cloudkitty-proc" containerID="cri-o://479abf975e75efce1c54c3a226386f041c2ca5083e07761a46d776e30efe55e9" gracePeriod=30 Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.451931 4965 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/305a32d6-c9f8-4494-b356-75d6c54c7467-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.451958 4965 reconciler_common.go:293] "Volume detached for volume \"pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.463874 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.464173 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="f8ddfcb2-bbac-405a-beee-d6e4da23170d" containerName="cloudkitty-api-log" containerID="cri-o://332c594571f180116989e323bf9d780e8c755ff84d07fe3b12f0ac47d671441f" gracePeriod=30 Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.464240 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="f8ddfcb2-bbac-405a-beee-d6e4da23170d" containerName="cloudkitty-api" containerID="cri-o://69ca164c9f37f0ce0c37fa016da9e88aa4cd0b504bbdb26325c7128fca9264df" gracePeriod=30 Feb 
19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.473958 4965 generic.go:334] "Generic (PLEG): container finished" podID="305a32d6-c9f8-4494-b356-75d6c54c7467" containerID="b1bf88bec9e961815551bc486611bc2a7542f58e859808003d94fcd9f5c7cb21" exitCode=0 Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.474026 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"305a32d6-c9f8-4494-b356-75d6c54c7467","Type":"ContainerDied","Data":"b1bf88bec9e961815551bc486611bc2a7542f58e859808003d94fcd9f5c7cb21"} Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.474056 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"305a32d6-c9f8-4494-b356-75d6c54c7467","Type":"ContainerDied","Data":"044219c30685117e651b3bff2a8dd282596bcb8cd3e142d14a7a2a6f0051de8d"} Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.474074 4965 scope.go:117] "RemoveContainer" containerID="b1bf88bec9e961815551bc486611bc2a7542f58e859808003d94fcd9f5c7cb21" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.474230 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.481130 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.485827 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bbd64606-53f8-484e-b8d2-c0fef4acb1bd","Type":"ContainerDied","Data":"1607d2fb6dd1dc2d9aa264e00ead79e3964938bdf143825f4964dc7e018ee31b"} Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.515796 4965 scope.go:117] "RemoveContainer" containerID="a1a65257004b242a15769f37a6af84d074317c1c847f8be584c5160b609c9964" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.530293 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.542062 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.569030 4965 scope.go:117] "RemoveContainer" containerID="b1bf88bec9e961815551bc486611bc2a7542f58e859808003d94fcd9f5c7cb21" Feb 19 10:07:04 crc kubenswrapper[4965]: E0219 10:07:04.570911 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1bf88bec9e961815551bc486611bc2a7542f58e859808003d94fcd9f5c7cb21\": container with ID starting with b1bf88bec9e961815551bc486611bc2a7542f58e859808003d94fcd9f5c7cb21 not found: ID does not exist" containerID="b1bf88bec9e961815551bc486611bc2a7542f58e859808003d94fcd9f5c7cb21" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.570960 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1bf88bec9e961815551bc486611bc2a7542f58e859808003d94fcd9f5c7cb21"} err="failed to get container status \"b1bf88bec9e961815551bc486611bc2a7542f58e859808003d94fcd9f5c7cb21\": rpc error: code = NotFound desc = could not find container \"b1bf88bec9e961815551bc486611bc2a7542f58e859808003d94fcd9f5c7cb21\": container with ID starting with 
b1bf88bec9e961815551bc486611bc2a7542f58e859808003d94fcd9f5c7cb21 not found: ID does not exist" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.570993 4965 scope.go:117] "RemoveContainer" containerID="a1a65257004b242a15769f37a6af84d074317c1c847f8be584c5160b609c9964" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.575134 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:07:04 crc kubenswrapper[4965]: E0219 10:07:04.588251 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1a65257004b242a15769f37a6af84d074317c1c847f8be584c5160b609c9964\": container with ID starting with a1a65257004b242a15769f37a6af84d074317c1c847f8be584c5160b609c9964 not found: ID does not exist" containerID="a1a65257004b242a15769f37a6af84d074317c1c847f8be584c5160b609c9964" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.588310 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1a65257004b242a15769f37a6af84d074317c1c847f8be584c5160b609c9964"} err="failed to get container status \"a1a65257004b242a15769f37a6af84d074317c1c847f8be584c5160b609c9964\": rpc error: code = NotFound desc = could not find container \"a1a65257004b242a15769f37a6af84d074317c1c847f8be584c5160b609c9964\": container with ID starting with a1a65257004b242a15769f37a6af84d074317c1c847f8be584c5160b609c9964 not found: ID does not exist" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.588340 4965 scope.go:117] "RemoveContainer" containerID="8fcc1af5d793f894765ce202de58987a948a9b64eb12e1b6d30caabf8608dd9d" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.656312 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.688841 4965 scope.go:117] "RemoveContainer" 
containerID="5a58c40e549604532d6c9dd9b699ffe7b8b46ce4c58064a2f3ecc8b63cbc14f1" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.694620 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:07:04 crc kubenswrapper[4965]: E0219 10:07:04.716289 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305a32d6-c9f8-4494-b356-75d6c54c7467" containerName="setup-container" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.716337 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="305a32d6-c9f8-4494-b356-75d6c54c7467" containerName="setup-container" Feb 19 10:07:04 crc kubenswrapper[4965]: E0219 10:07:04.716373 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305a32d6-c9f8-4494-b356-75d6c54c7467" containerName="rabbitmq" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.716385 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="305a32d6-c9f8-4494-b356-75d6c54c7467" containerName="rabbitmq" Feb 19 10:07:04 crc kubenswrapper[4965]: E0219 10:07:04.716423 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbd64606-53f8-484e-b8d2-c0fef4acb1bd" containerName="rabbitmq" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.716432 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbd64606-53f8-484e-b8d2-c0fef4acb1bd" containerName="rabbitmq" Feb 19 10:07:04 crc kubenswrapper[4965]: E0219 10:07:04.716447 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4c84601-c0b3-46eb-8323-08b550442026" containerName="cloudkitty-storageinit" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.716460 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c84601-c0b3-46eb-8323-08b550442026" containerName="cloudkitty-storageinit" Feb 19 10:07:04 crc kubenswrapper[4965]: E0219 10:07:04.716489 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbd64606-53f8-484e-b8d2-c0fef4acb1bd" 
containerName="setup-container" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.716499 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbd64606-53f8-484e-b8d2-c0fef4acb1bd" containerName="setup-container" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.716964 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="305a32d6-c9f8-4494-b356-75d6c54c7467" containerName="rabbitmq" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.716980 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4c84601-c0b3-46eb-8323-08b550442026" containerName="cloudkitty-storageinit" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.716993 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbd64606-53f8-484e-b8d2-c0fef4acb1bd" containerName="rabbitmq" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.719330 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.728923 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.728989 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.729002 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-7ghtg" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.729238 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.729289 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.729600 4965 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"rabbitmq-server-conf" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.739586 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.774129 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.774237 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.774290 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.774322 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.774378 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.774487 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.774512 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.774552 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qljrk\" (UniqueName: \"kubernetes.io/projected/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-kube-api-access-qljrk\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.774664 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.774716 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-config-data\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.774733 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.786163 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.805691 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.810552 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.816139 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.816154 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.816260 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.816164 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.816352 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.816145 4965 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.816626 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xs2sm" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.823599 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.879550 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.879593 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.879622 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qljrk\" (UniqueName: \"kubernetes.io/projected/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-kube-api-access-qljrk\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.879658 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8214d39f-90ff-4188-abbf-6a097f33eef0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc 
kubenswrapper[4965]: I0219 10:07:04.879680 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hth7\" (UniqueName: \"kubernetes.io/projected/8214d39f-90ff-4188-abbf-6a097f33eef0-kube-api-access-4hth7\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.879710 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8214d39f-90ff-4188-abbf-6a097f33eef0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.879726 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8214d39f-90ff-4188-abbf-6a097f33eef0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.879761 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.879784 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8214d39f-90ff-4188-abbf-6a097f33eef0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 
10:07:04.879798 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8214d39f-90ff-4188-abbf-6a097f33eef0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.879827 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8214d39f-90ff-4188-abbf-6a097f33eef0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.879846 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-config-data\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.879864 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.879896 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.879925 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.879945 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8214d39f-90ff-4188-abbf-6a097f33eef0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.879960 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8214d39f-90ff-4188-abbf-6a097f33eef0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.879977 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.880003 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.880023 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.880055 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.880087 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8214d39f-90ff-4188-abbf-6a097f33eef0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.880532 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.880617 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.881157 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-config-data\") 
pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.882028 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.885322 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.887705 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.890373 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.893299 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.893930 4965 csi_attacher.go:380] kubernetes.io/csi: 
attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.893963 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a606e5eb2618d24d56413a1015b901d80936de0be271267d3eeee72120bb76ae/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.895043 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.906785 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qljrk\" (UniqueName: \"kubernetes.io/projected/e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa-kube-api-access-qljrk\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.967439 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b53b8e9d-af36-445c-a8c8-07d5c566352c\") pod \"rabbitmq-server-0\" (UID: \"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa\") " pod="openstack/rabbitmq-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.982833 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/8214d39f-90ff-4188-abbf-6a097f33eef0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.982888 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8214d39f-90ff-4188-abbf-6a097f33eef0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.982972 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8214d39f-90ff-4188-abbf-6a097f33eef0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.983031 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8214d39f-90ff-4188-abbf-6a097f33eef0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.983056 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8214d39f-90ff-4188-abbf-6a097f33eef0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.983084 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.983187 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8214d39f-90ff-4188-abbf-6a097f33eef0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.983286 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8214d39f-90ff-4188-abbf-6a097f33eef0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.983963 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hth7\" (UniqueName: \"kubernetes.io/projected/8214d39f-90ff-4188-abbf-6a097f33eef0-kube-api-access-4hth7\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.984035 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8214d39f-90ff-4188-abbf-6a097f33eef0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.984072 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/8214d39f-90ff-4188-abbf-6a097f33eef0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.984836 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8214d39f-90ff-4188-abbf-6a097f33eef0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.984946 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8214d39f-90ff-4188-abbf-6a097f33eef0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.985265 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8214d39f-90ff-4188-abbf-6a097f33eef0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.985451 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8214d39f-90ff-4188-abbf-6a097f33eef0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.985866 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8214d39f-90ff-4188-abbf-6a097f33eef0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.986410 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8214d39f-90ff-4188-abbf-6a097f33eef0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.987523 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8214d39f-90ff-4188-abbf-6a097f33eef0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.992534 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8214d39f-90ff-4188-abbf-6a097f33eef0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.993233 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8214d39f-90ff-4188-abbf-6a097f33eef0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.995808 4965 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 10:07:04 crc kubenswrapper[4965]: I0219 10:07:04.995855 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c02e9c89efc1374aeb0c7995657ea859640e28776f8f5ffa07ca5ea0e348ba25/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:05 crc kubenswrapper[4965]: I0219 10:07:05.004603 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hth7\" (UniqueName: \"kubernetes.io/projected/8214d39f-90ff-4188-abbf-6a097f33eef0-kube-api-access-4hth7\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:05 crc kubenswrapper[4965]: I0219 10:07:05.065715 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-207bdf8b-fa4d-47cb-a97b-effc8257e554\") pod \"rabbitmq-cell1-server-0\" (UID: \"8214d39f-90ff-4188-abbf-6a097f33eef0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:05 crc kubenswrapper[4965]: I0219 10:07:05.066425 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 10:07:05 crc kubenswrapper[4965]: I0219 10:07:05.130001 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:05 crc kubenswrapper[4965]: I0219 10:07:05.228488 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="305a32d6-c9f8-4494-b356-75d6c54c7467" path="/var/lib/kubelet/pods/305a32d6-c9f8-4494-b356-75d6c54c7467/volumes" Feb 19 10:07:05 crc kubenswrapper[4965]: I0219 10:07:05.229257 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbd64606-53f8-484e-b8d2-c0fef4acb1bd" path="/var/lib/kubelet/pods/bbd64606-53f8-484e-b8d2-c0fef4acb1bd/volumes" Feb 19 10:07:05 crc kubenswrapper[4965]: I0219 10:07:05.505129 4965 generic.go:334] "Generic (PLEG): container finished" podID="f8ddfcb2-bbac-405a-beee-d6e4da23170d" containerID="332c594571f180116989e323bf9d780e8c755ff84d07fe3b12f0ac47d671441f" exitCode=143 Feb 19 10:07:05 crc kubenswrapper[4965]: I0219 10:07:05.505186 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"f8ddfcb2-bbac-405a-beee-d6e4da23170d","Type":"ContainerDied","Data":"332c594571f180116989e323bf9d780e8c755ff84d07fe3b12f0ac47d671441f"} Feb 19 10:07:05 crc kubenswrapper[4965]: I0219 10:07:05.506878 4965 generic.go:334] "Generic (PLEG): container finished" podID="81c49478-306d-44e9-99bd-157057f0ed27" containerID="479abf975e75efce1c54c3a226386f041c2ca5083e07761a46d776e30efe55e9" exitCode=0 Feb 19 10:07:05 crc kubenswrapper[4965]: I0219 10:07:05.506903 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"81c49478-306d-44e9-99bd-157057f0ed27","Type":"ContainerDied","Data":"479abf975e75efce1c54c3a226386f041c2ca5083e07761a46d776e30efe55e9"} Feb 19 10:07:05 crc kubenswrapper[4965]: I0219 10:07:05.610053 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:07:05 crc kubenswrapper[4965]: W0219 10:07:05.808117 4965 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8214d39f_90ff_4188_abbf_6a097f33eef0.slice/crio-b2f5f18a8ba12ad98f547f224a34ead716091a0aa55a746d4a8b24909bfbfdde WatchSource:0}: Error finding container b2f5f18a8ba12ad98f547f224a34ead716091a0aa55a746d4a8b24909bfbfdde: Status 404 returned error can't find the container with id b2f5f18a8ba12ad98f547f224a34ead716091a0aa55a746d4a8b24909bfbfdde Feb 19 10:07:05 crc kubenswrapper[4965]: I0219 10:07:05.836465 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.130697 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.240914 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/81c49478-306d-44e9-99bd-157057f0ed27-certs\") pod \"81c49478-306d-44e9-99bd-157057f0ed27\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.241350 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-config-data\") pod \"81c49478-306d-44e9-99bd-157057f0ed27\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.241387 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-combined-ca-bundle\") pod \"81c49478-306d-44e9-99bd-157057f0ed27\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.241548 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq67l\" (UniqueName: 
\"kubernetes.io/projected/81c49478-306d-44e9-99bd-157057f0ed27-kube-api-access-cq67l\") pod \"81c49478-306d-44e9-99bd-157057f0ed27\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.241622 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-scripts\") pod \"81c49478-306d-44e9-99bd-157057f0ed27\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.241718 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-config-data-custom\") pod \"81c49478-306d-44e9-99bd-157057f0ed27\" (UID: \"81c49478-306d-44e9-99bd-157057f0ed27\") " Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.253545 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81c49478-306d-44e9-99bd-157057f0ed27-certs" (OuterVolumeSpecName: "certs") pod "81c49478-306d-44e9-99bd-157057f0ed27" (UID: "81c49478-306d-44e9-99bd-157057f0ed27"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.275561 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-scripts" (OuterVolumeSpecName: "scripts") pod "81c49478-306d-44e9-99bd-157057f0ed27" (UID: "81c49478-306d-44e9-99bd-157057f0ed27"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.275725 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "81c49478-306d-44e9-99bd-157057f0ed27" (UID: "81c49478-306d-44e9-99bd-157057f0ed27"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.286541 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81c49478-306d-44e9-99bd-157057f0ed27-kube-api-access-cq67l" (OuterVolumeSpecName: "kube-api-access-cq67l") pod "81c49478-306d-44e9-99bd-157057f0ed27" (UID: "81c49478-306d-44e9-99bd-157057f0ed27"). InnerVolumeSpecName "kube-api-access-cq67l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.312510 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-config-data" (OuterVolumeSpecName: "config-data") pod "81c49478-306d-44e9-99bd-157057f0ed27" (UID: "81c49478-306d-44e9-99bd-157057f0ed27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.320596 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81c49478-306d-44e9-99bd-157057f0ed27" (UID: "81c49478-306d-44e9-99bd-157057f0ed27"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.344201 4965 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/81c49478-306d-44e9-99bd-157057f0ed27-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.344226 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.344237 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.344261 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq67l\" (UniqueName: \"kubernetes.io/projected/81c49478-306d-44e9-99bd-157057f0ed27-kube-api-access-cq67l\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.344271 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.344279 4965 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81c49478-306d-44e9-99bd-157057f0ed27-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.520948 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa","Type":"ContainerStarted","Data":"ceab658b8a55d4a7eeb192a322770edf31d581a2635ea9165356c55abe93febe"} Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 
10:07:06.521951 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8214d39f-90ff-4188-abbf-6a097f33eef0","Type":"ContainerStarted","Data":"b2f5f18a8ba12ad98f547f224a34ead716091a0aa55a746d4a8b24909bfbfdde"} Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.524110 4965 generic.go:334] "Generic (PLEG): container finished" podID="f8ddfcb2-bbac-405a-beee-d6e4da23170d" containerID="69ca164c9f37f0ce0c37fa016da9e88aa4cd0b504bbdb26325c7128fca9264df" exitCode=0 Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.524161 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"f8ddfcb2-bbac-405a-beee-d6e4da23170d","Type":"ContainerDied","Data":"69ca164c9f37f0ce0c37fa016da9e88aa4cd0b504bbdb26325c7128fca9264df"} Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.529191 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"81c49478-306d-44e9-99bd-157057f0ed27","Type":"ContainerDied","Data":"93daa53f64fb1a55c99e091e9add6cc4b99f3244d5ee77a32d638135b35e4519"} Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.529235 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.529243 4965 scope.go:117] "RemoveContainer" containerID="479abf975e75efce1c54c3a226386f041c2ca5083e07761a46d776e30efe55e9" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.603359 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.610705 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.628298 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.644238 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 19 10:07:06 crc kubenswrapper[4965]: E0219 10:07:06.644839 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8ddfcb2-bbac-405a-beee-d6e4da23170d" containerName="cloudkitty-api" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.644862 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8ddfcb2-bbac-405a-beee-d6e4da23170d" containerName="cloudkitty-api" Feb 19 10:07:06 crc kubenswrapper[4965]: E0219 10:07:06.644911 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8ddfcb2-bbac-405a-beee-d6e4da23170d" containerName="cloudkitty-api-log" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.644921 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8ddfcb2-bbac-405a-beee-d6e4da23170d" containerName="cloudkitty-api-log" Feb 19 10:07:06 crc kubenswrapper[4965]: E0219 10:07:06.644946 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81c49478-306d-44e9-99bd-157057f0ed27" containerName="cloudkitty-proc" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.644954 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="81c49478-306d-44e9-99bd-157057f0ed27" containerName="cloudkitty-proc" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.645233 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8ddfcb2-bbac-405a-beee-d6e4da23170d" containerName="cloudkitty-api" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.645254 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8ddfcb2-bbac-405a-beee-d6e4da23170d" containerName="cloudkitty-api-log" Feb 19 10:07:06 crc 
kubenswrapper[4965]: I0219 10:07:06.645280 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="81c49478-306d-44e9-99bd-157057f0ed27" containerName="cloudkitty-proc" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.650346 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.650381 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-public-tls-certs\") pod \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.650522 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8ddfcb2-bbac-405a-beee-d6e4da23170d-logs\") pod \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.650565 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st4kl\" (UniqueName: \"kubernetes.io/projected/f8ddfcb2-bbac-405a-beee-d6e4da23170d-kube-api-access-st4kl\") pod \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.650632 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-combined-ca-bundle\") pod \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.650681 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-config-data\") pod \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.650718 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-config-data-custom\") pod \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.650782 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-scripts\") pod \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.650841 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-internal-tls-certs\") pod \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.650866 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f8ddfcb2-bbac-405a-beee-d6e4da23170d-certs\") pod \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\" (UID: \"f8ddfcb2-bbac-405a-beee-d6e4da23170d\") " Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.653618 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.654535 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8ddfcb2-bbac-405a-beee-d6e4da23170d-logs" (OuterVolumeSpecName: "logs") pod 
"f8ddfcb2-bbac-405a-beee-d6e4da23170d" (UID: "f8ddfcb2-bbac-405a-beee-d6e4da23170d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.660691 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.661024 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-scripts" (OuterVolumeSpecName: "scripts") pod "f8ddfcb2-bbac-405a-beee-d6e4da23170d" (UID: "f8ddfcb2-bbac-405a-beee-d6e4da23170d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.665050 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8ddfcb2-bbac-405a-beee-d6e4da23170d-kube-api-access-st4kl" (OuterVolumeSpecName: "kube-api-access-st4kl") pod "f8ddfcb2-bbac-405a-beee-d6e4da23170d" (UID: "f8ddfcb2-bbac-405a-beee-d6e4da23170d"). InnerVolumeSpecName "kube-api-access-st4kl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.667140 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8ddfcb2-bbac-405a-beee-d6e4da23170d-certs" (OuterVolumeSpecName: "certs") pod "f8ddfcb2-bbac-405a-beee-d6e4da23170d" (UID: "f8ddfcb2-bbac-405a-beee-d6e4da23170d"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.672701 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f8ddfcb2-bbac-405a-beee-d6e4da23170d" (UID: "f8ddfcb2-bbac-405a-beee-d6e4da23170d"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.753919 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa682c21-5c48-4518-9033-2f28eae7f24d-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"aa682c21-5c48-4518-9033-2f28eae7f24d\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.754022 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa682c21-5c48-4518-9033-2f28eae7f24d-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"aa682c21-5c48-4518-9033-2f28eae7f24d\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.754065 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa682c21-5c48-4518-9033-2f28eae7f24d-scripts\") pod \"cloudkitty-proc-0\" (UID: \"aa682c21-5c48-4518-9033-2f28eae7f24d\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.754104 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa682c21-5c48-4518-9033-2f28eae7f24d-config-data\") pod \"cloudkitty-proc-0\" (UID: \"aa682c21-5c48-4518-9033-2f28eae7f24d\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.754126 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zntph\" (UniqueName: \"kubernetes.io/projected/aa682c21-5c48-4518-9033-2f28eae7f24d-kube-api-access-zntph\") pod \"cloudkitty-proc-0\" (UID: \"aa682c21-5c48-4518-9033-2f28eae7f24d\") " 
pod="openstack/cloudkitty-proc-0" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.754153 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/aa682c21-5c48-4518-9033-2f28eae7f24d-certs\") pod \"cloudkitty-proc-0\" (UID: \"aa682c21-5c48-4518-9033-2f28eae7f24d\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.754291 4965 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8ddfcb2-bbac-405a-beee-d6e4da23170d-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.754310 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st4kl\" (UniqueName: \"kubernetes.io/projected/f8ddfcb2-bbac-405a-beee-d6e4da23170d-kube-api-access-st4kl\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.754320 4965 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.754329 4965 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.754337 4965 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f8ddfcb2-bbac-405a-beee-d6e4da23170d-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.817076 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"f8ddfcb2-bbac-405a-beee-d6e4da23170d" (UID: "f8ddfcb2-bbac-405a-beee-d6e4da23170d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.856413 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa682c21-5c48-4518-9033-2f28eae7f24d-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"aa682c21-5c48-4518-9033-2f28eae7f24d\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.856470 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa682c21-5c48-4518-9033-2f28eae7f24d-scripts\") pod \"cloudkitty-proc-0\" (UID: \"aa682c21-5c48-4518-9033-2f28eae7f24d\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.856512 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa682c21-5c48-4518-9033-2f28eae7f24d-config-data\") pod \"cloudkitty-proc-0\" (UID: \"aa682c21-5c48-4518-9033-2f28eae7f24d\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.856534 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zntph\" (UniqueName: \"kubernetes.io/projected/aa682c21-5c48-4518-9033-2f28eae7f24d-kube-api-access-zntph\") pod \"cloudkitty-proc-0\" (UID: \"aa682c21-5c48-4518-9033-2f28eae7f24d\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.856557 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/aa682c21-5c48-4518-9033-2f28eae7f24d-certs\") pod \"cloudkitty-proc-0\" (UID: \"aa682c21-5c48-4518-9033-2f28eae7f24d\") " pod="openstack/cloudkitty-proc-0" Feb 19 
10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.856644 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa682c21-5c48-4518-9033-2f28eae7f24d-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"aa682c21-5c48-4518-9033-2f28eae7f24d\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.856708 4965 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.859695 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa682c21-5c48-4518-9033-2f28eae7f24d-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"aa682c21-5c48-4518-9033-2f28eae7f24d\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.860154 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa682c21-5c48-4518-9033-2f28eae7f24d-config-data\") pod \"cloudkitty-proc-0\" (UID: \"aa682c21-5c48-4518-9033-2f28eae7f24d\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.861048 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa682c21-5c48-4518-9033-2f28eae7f24d-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"aa682c21-5c48-4518-9033-2f28eae7f24d\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.861489 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa682c21-5c48-4518-9033-2f28eae7f24d-scripts\") pod \"cloudkitty-proc-0\" (UID: 
\"aa682c21-5c48-4518-9033-2f28eae7f24d\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.862403 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/aa682c21-5c48-4518-9033-2f28eae7f24d-certs\") pod \"cloudkitty-proc-0\" (UID: \"aa682c21-5c48-4518-9033-2f28eae7f24d\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.875071 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zntph\" (UniqueName: \"kubernetes.io/projected/aa682c21-5c48-4518-9033-2f28eae7f24d-kube-api-access-zntph\") pod \"cloudkitty-proc-0\" (UID: \"aa682c21-5c48-4518-9033-2f28eae7f24d\") " pod="openstack/cloudkitty-proc-0" Feb 19 10:07:06 crc kubenswrapper[4965]: I0219 10:07:06.989828 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.227034 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81c49478-306d-44e9-99bd-157057f0ed27" path="/var/lib/kubelet/pods/81c49478-306d-44e9-99bd-157057f0ed27/volumes" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.292915 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f8ddfcb2-bbac-405a-beee-d6e4da23170d" (UID: "f8ddfcb2-bbac-405a-beee-d6e4da23170d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.300181 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-config-data" (OuterVolumeSpecName: "config-data") pod "f8ddfcb2-bbac-405a-beee-d6e4da23170d" (UID: "f8ddfcb2-bbac-405a-beee-d6e4da23170d"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.343918 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f8ddfcb2-bbac-405a-beee-d6e4da23170d" (UID: "f8ddfcb2-bbac-405a-beee-d6e4da23170d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.372095 4965 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.372129 4965 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.372138 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8ddfcb2-bbac-405a-beee-d6e4da23170d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:07 crc kubenswrapper[4965]: W0219 10:07:07.483941 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa682c21_5c48_4518_9033_2f28eae7f24d.slice/crio-b6d2c95febda0b4e94f324a01a5810f1f37d76618035b2a2d98ddac01cd6a17e WatchSource:0}: Error finding container b6d2c95febda0b4e94f324a01a5810f1f37d76618035b2a2d98ddac01cd6a17e: Status 404 returned error can't find the container with id b6d2c95febda0b4e94f324a01a5810f1f37d76618035b2a2d98ddac01cd6a17e Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.556290 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.635817 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.635856 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"aa682c21-5c48-4518-9033-2f28eae7f24d","Type":"ContainerStarted","Data":"b6d2c95febda0b4e94f324a01a5810f1f37d76618035b2a2d98ddac01cd6a17e"} Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.635874 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa","Type":"ContainerStarted","Data":"be5ee0ae14d047024958c60742b7dfaaf670b250edfb859c820de895131b6b18"} Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.635940 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"f8ddfcb2-bbac-405a-beee-d6e4da23170d","Type":"ContainerDied","Data":"19def9397b2c4afd8b2781c5dee81dbce3b165e6f509953a4036284fe63072e0"} Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.635970 4965 scope.go:117] "RemoveContainer" containerID="69ca164c9f37f0ce0c37fa016da9e88aa4cd0b504bbdb26325c7128fca9264df" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.667350 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.678696 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.697433 4965 scope.go:117] "RemoveContainer" containerID="332c594571f180116989e323bf9d780e8c755ff84d07fe3b12f0ac47d671441f" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.702872 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.704722 4965 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.707216 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.708248 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.710155 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.717095 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.781449 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ed10660-2674-4274-a62b-366af8d375da-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.781833 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ed10660-2674-4274-a62b-366af8d375da-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.781929 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/3ed10660-2674-4274-a62b-366af8d375da-certs\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.782036 4965 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ed10660-2674-4274-a62b-366af8d375da-config-data\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.782249 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6wkj\" (UniqueName: \"kubernetes.io/projected/3ed10660-2674-4274-a62b-366af8d375da-kube-api-access-m6wkj\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.782356 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ed10660-2674-4274-a62b-366af8d375da-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.782464 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ed10660-2674-4274-a62b-366af8d375da-scripts\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.782623 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ed10660-2674-4274-a62b-366af8d375da-logs\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.782720 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed10660-2674-4274-a62b-366af8d375da-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.885237 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ed10660-2674-4274-a62b-366af8d375da-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.885852 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ed10660-2674-4274-a62b-366af8d375da-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.885949 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/3ed10660-2674-4274-a62b-366af8d375da-certs\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.886073 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ed10660-2674-4274-a62b-366af8d375da-config-data\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.886218 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6wkj\" (UniqueName: \"kubernetes.io/projected/3ed10660-2674-4274-a62b-366af8d375da-kube-api-access-m6wkj\") pod \"cloudkitty-api-0\" (UID: 
\"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.886335 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ed10660-2674-4274-a62b-366af8d375da-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.886459 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ed10660-2674-4274-a62b-366af8d375da-scripts\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.886612 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ed10660-2674-4274-a62b-366af8d375da-logs\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.886707 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed10660-2674-4274-a62b-366af8d375da-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.890283 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ed10660-2674-4274-a62b-366af8d375da-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.893360 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ed10660-2674-4274-a62b-366af8d375da-logs\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.896373 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ed10660-2674-4274-a62b-366af8d375da-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.897309 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed10660-2674-4274-a62b-366af8d375da-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.898169 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ed10660-2674-4274-a62b-366af8d375da-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.898695 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ed10660-2674-4274-a62b-366af8d375da-config-data\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.899606 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ed10660-2674-4274-a62b-366af8d375da-scripts\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 
crc kubenswrapper[4965]: I0219 10:07:07.899893 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/3ed10660-2674-4274-a62b-366af8d375da-certs\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:07 crc kubenswrapper[4965]: I0219 10:07:07.915726 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6wkj\" (UniqueName: \"kubernetes.io/projected/3ed10660-2674-4274-a62b-366af8d375da-kube-api-access-m6wkj\") pod \"cloudkitty-api-0\" (UID: \"3ed10660-2674-4274-a62b-366af8d375da\") " pod="openstack/cloudkitty-api-0" Feb 19 10:07:08 crc kubenswrapper[4965]: I0219 10:07:08.031693 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 19 10:07:08 crc kubenswrapper[4965]: I0219 10:07:08.551486 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 19 10:07:08 crc kubenswrapper[4965]: I0219 10:07:08.570585 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"3ed10660-2674-4274-a62b-366af8d375da","Type":"ContainerStarted","Data":"652f16b2af8993df56e1c5ebbb46fa187218e59525ef82ffd69bea69ba48bbe0"} Feb 19 10:07:08 crc kubenswrapper[4965]: I0219 10:07:08.572185 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8214d39f-90ff-4188-abbf-6a097f33eef0","Type":"ContainerStarted","Data":"dfae216125c0b56f6e0a48ada368f3c6903410e73a5a3508866a6ece3dab834e"} Feb 19 10:07:08 crc kubenswrapper[4965]: I0219 10:07:08.580225 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"aa682c21-5c48-4518-9033-2f28eae7f24d","Type":"ContainerStarted","Data":"15ade29d106014cb09e8b8d7da378429756d3d1349175fc2672af4852b6c1508"} Feb 19 10:07:08 crc kubenswrapper[4965]: I0219 10:07:08.658556 
4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.386852685 podStartE2EDuration="2.658536461s" podCreationTimestamp="2026-02-19 10:07:06 +0000 UTC" firstStartedPulling="2026-02-19 10:07:07.486752643 +0000 UTC m=+1483.108073953" lastFinishedPulling="2026-02-19 10:07:07.758436419 +0000 UTC m=+1483.379757729" observedRunningTime="2026-02-19 10:07:08.652411653 +0000 UTC m=+1484.273732963" watchObservedRunningTime="2026-02-19 10:07:08.658536461 +0000 UTC m=+1484.279857771" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.211408 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8ddfcb2-bbac-405a-beee-d6e4da23170d" path="/var/lib/kubelet/pods/f8ddfcb2-bbac-405a-beee-d6e4da23170d/volumes" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.423244 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-q7r8b"] Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.425211 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.428108 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.573299 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-q7r8b"] Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.628881 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-openstack-edpm-ipam\") pod \"dnsmasq-dns-dc7c944bf-q7r8b\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.628952 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-dns-svc\") pod \"dnsmasq-dns-dc7c944bf-q7r8b\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.628973 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-ovsdbserver-sb\") pod \"dnsmasq-dns-dc7c944bf-q7r8b\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.628997 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc2c5\" (UniqueName: \"kubernetes.io/projected/93ff6407-c851-4dec-a6bd-383600a74940-kube-api-access-xc2c5\") pod \"dnsmasq-dns-dc7c944bf-q7r8b\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " 
pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.629016 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-dns-swift-storage-0\") pod \"dnsmasq-dns-dc7c944bf-q7r8b\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.629054 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-config\") pod \"dnsmasq-dns-dc7c944bf-q7r8b\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.629070 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-ovsdbserver-nb\") pod \"dnsmasq-dns-dc7c944bf-q7r8b\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.664582 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"3ed10660-2674-4274-a62b-366af8d375da","Type":"ContainerStarted","Data":"6dcf22111f42825b3849e53374802f122c539608b2ee434e78a69a141fdf27de"} Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.664630 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"3ed10660-2674-4274-a62b-366af8d375da","Type":"ContainerStarted","Data":"488e75abd9f4ab10987df00f417a3922cccc2fb1452586df6839dc70af9f6d5c"} Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.665255 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/cloudkitty-api-0" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.713511 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=2.713486695 podStartE2EDuration="2.713486695s" podCreationTimestamp="2026-02-19 10:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:07:09.697572429 +0000 UTC m=+1485.318893749" watchObservedRunningTime="2026-02-19 10:07:09.713486695 +0000 UTC m=+1485.334808005" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.734655 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-openstack-edpm-ipam\") pod \"dnsmasq-dns-dc7c944bf-q7r8b\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.734752 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-dns-svc\") pod \"dnsmasq-dns-dc7c944bf-q7r8b\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.734784 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-ovsdbserver-sb\") pod \"dnsmasq-dns-dc7c944bf-q7r8b\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.734819 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc2c5\" (UniqueName: 
\"kubernetes.io/projected/93ff6407-c851-4dec-a6bd-383600a74940-kube-api-access-xc2c5\") pod \"dnsmasq-dns-dc7c944bf-q7r8b\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.734846 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-dns-swift-storage-0\") pod \"dnsmasq-dns-dc7c944bf-q7r8b\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.734898 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-config\") pod \"dnsmasq-dns-dc7c944bf-q7r8b\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.734924 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-ovsdbserver-nb\") pod \"dnsmasq-dns-dc7c944bf-q7r8b\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.736168 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-ovsdbserver-nb\") pod \"dnsmasq-dns-dc7c944bf-q7r8b\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.743867 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-ovsdbserver-sb\") pod \"dnsmasq-dns-dc7c944bf-q7r8b\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.750881 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-dns-svc\") pod \"dnsmasq-dns-dc7c944bf-q7r8b\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.751379 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-dns-swift-storage-0\") pod \"dnsmasq-dns-dc7c944bf-q7r8b\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.753800 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-openstack-edpm-ipam\") pod \"dnsmasq-dns-dc7c944bf-q7r8b\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.755026 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-config\") pod \"dnsmasq-dns-dc7c944bf-q7r8b\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.761347 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc2c5\" (UniqueName: \"kubernetes.io/projected/93ff6407-c851-4dec-a6bd-383600a74940-kube-api-access-xc2c5\") pod 
\"dnsmasq-dns-dc7c944bf-q7r8b\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:09 crc kubenswrapper[4965]: I0219 10:07:09.798791 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:10 crc kubenswrapper[4965]: W0219 10:07:10.616370 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93ff6407_c851_4dec_a6bd_383600a74940.slice/crio-6ad8d14ed216832bef07e8ec710cfca219ebe366a4afbedf2f3fb4cd68f4e584 WatchSource:0}: Error finding container 6ad8d14ed216832bef07e8ec710cfca219ebe366a4afbedf2f3fb4cd68f4e584: Status 404 returned error can't find the container with id 6ad8d14ed216832bef07e8ec710cfca219ebe366a4afbedf2f3fb4cd68f4e584 Feb 19 10:07:10 crc kubenswrapper[4965]: I0219 10:07:10.616638 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-q7r8b"] Feb 19 10:07:10 crc kubenswrapper[4965]: I0219 10:07:10.684401 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" event={"ID":"93ff6407-c851-4dec-a6bd-383600a74940","Type":"ContainerStarted","Data":"6ad8d14ed216832bef07e8ec710cfca219ebe366a4afbedf2f3fb4cd68f4e584"} Feb 19 10:07:11 crc kubenswrapper[4965]: I0219 10:07:11.693384 4965 generic.go:334] "Generic (PLEG): container finished" podID="93ff6407-c851-4dec-a6bd-383600a74940" containerID="b8bff6e9c41b9e31d85bb19c025728d128aaf495f58dcc45a68331e207ddf71b" exitCode=0 Feb 19 10:07:11 crc kubenswrapper[4965]: I0219 10:07:11.694176 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" event={"ID":"93ff6407-c851-4dec-a6bd-383600a74940","Type":"ContainerDied","Data":"b8bff6e9c41b9e31d85bb19c025728d128aaf495f58dcc45a68331e207ddf71b"} Feb 19 10:07:12 crc kubenswrapper[4965]: I0219 10:07:12.717074 4965 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" event={"ID":"93ff6407-c851-4dec-a6bd-383600a74940","Type":"ContainerStarted","Data":"6e8cc8d0e82304e2b957961b828a43d0cab2bcb1fa15e9a7ce69f60d2ad47402"} Feb 19 10:07:12 crc kubenswrapper[4965]: I0219 10:07:12.717438 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:12 crc kubenswrapper[4965]: I0219 10:07:12.750948 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" podStartSLOduration=3.75092799 podStartE2EDuration="3.75092799s" podCreationTimestamp="2026-02-19 10:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:07:12.742506266 +0000 UTC m=+1488.363827586" watchObservedRunningTime="2026-02-19 10:07:12.75092799 +0000 UTC m=+1488.372249310" Feb 19 10:07:16 crc kubenswrapper[4965]: I0219 10:07:16.601295 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:07:16 crc kubenswrapper[4965]: I0219 10:07:16.601746 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:07:19 crc kubenswrapper[4965]: I0219 10:07:19.801414 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:19 crc kubenswrapper[4965]: I0219 10:07:19.906669 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-54dd998c-dbkhp"] Feb 19 10:07:19 crc kubenswrapper[4965]: I0219 10:07:19.907122 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54dd998c-dbkhp" podUID="6d323130-9034-45b0-9f95-02b4494ff391" containerName="dnsmasq-dns" containerID="cri-o://4939c29d495e6f2cd4b1da938e4d9028249a2ce393ea6aa52c613d0350b32b4e" gracePeriod=10 Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.106425 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c4b758ff5-gdxjp"] Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.108734 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.128679 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c4b758ff5-gdxjp"] Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.190256 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f29d993-47df-4952-a137-bb5cf52ea59a-ovsdbserver-sb\") pod \"dnsmasq-dns-c4b758ff5-gdxjp\" (UID: \"5f29d993-47df-4952-a137-bb5cf52ea59a\") " pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.190729 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f29d993-47df-4952-a137-bb5cf52ea59a-config\") pod \"dnsmasq-dns-c4b758ff5-gdxjp\" (UID: \"5f29d993-47df-4952-a137-bb5cf52ea59a\") " pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.190912 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f29d993-47df-4952-a137-bb5cf52ea59a-dns-svc\") pod 
\"dnsmasq-dns-c4b758ff5-gdxjp\" (UID: \"5f29d993-47df-4952-a137-bb5cf52ea59a\") " pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.191057 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx5nv\" (UniqueName: \"kubernetes.io/projected/5f29d993-47df-4952-a137-bb5cf52ea59a-kube-api-access-nx5nv\") pod \"dnsmasq-dns-c4b758ff5-gdxjp\" (UID: \"5f29d993-47df-4952-a137-bb5cf52ea59a\") " pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.191340 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f29d993-47df-4952-a137-bb5cf52ea59a-ovsdbserver-nb\") pod \"dnsmasq-dns-c4b758ff5-gdxjp\" (UID: \"5f29d993-47df-4952-a137-bb5cf52ea59a\") " pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.191504 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f29d993-47df-4952-a137-bb5cf52ea59a-dns-swift-storage-0\") pod \"dnsmasq-dns-c4b758ff5-gdxjp\" (UID: \"5f29d993-47df-4952-a137-bb5cf52ea59a\") " pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.191649 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5f29d993-47df-4952-a137-bb5cf52ea59a-openstack-edpm-ipam\") pod \"dnsmasq-dns-c4b758ff5-gdxjp\" (UID: \"5f29d993-47df-4952-a137-bb5cf52ea59a\") " pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.293630 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5f29d993-47df-4952-a137-bb5cf52ea59a-config\") pod \"dnsmasq-dns-c4b758ff5-gdxjp\" (UID: \"5f29d993-47df-4952-a137-bb5cf52ea59a\") " pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.293705 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f29d993-47df-4952-a137-bb5cf52ea59a-dns-svc\") pod \"dnsmasq-dns-c4b758ff5-gdxjp\" (UID: \"5f29d993-47df-4952-a137-bb5cf52ea59a\") " pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.293736 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx5nv\" (UniqueName: \"kubernetes.io/projected/5f29d993-47df-4952-a137-bb5cf52ea59a-kube-api-access-nx5nv\") pod \"dnsmasq-dns-c4b758ff5-gdxjp\" (UID: \"5f29d993-47df-4952-a137-bb5cf52ea59a\") " pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.293768 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f29d993-47df-4952-a137-bb5cf52ea59a-ovsdbserver-nb\") pod \"dnsmasq-dns-c4b758ff5-gdxjp\" (UID: \"5f29d993-47df-4952-a137-bb5cf52ea59a\") " pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.293817 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f29d993-47df-4952-a137-bb5cf52ea59a-dns-swift-storage-0\") pod \"dnsmasq-dns-c4b758ff5-gdxjp\" (UID: \"5f29d993-47df-4952-a137-bb5cf52ea59a\") " pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.293855 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/5f29d993-47df-4952-a137-bb5cf52ea59a-openstack-edpm-ipam\") pod \"dnsmasq-dns-c4b758ff5-gdxjp\" (UID: \"5f29d993-47df-4952-a137-bb5cf52ea59a\") " pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.293940 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f29d993-47df-4952-a137-bb5cf52ea59a-ovsdbserver-sb\") pod \"dnsmasq-dns-c4b758ff5-gdxjp\" (UID: \"5f29d993-47df-4952-a137-bb5cf52ea59a\") " pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.294831 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f29d993-47df-4952-a137-bb5cf52ea59a-ovsdbserver-sb\") pod \"dnsmasq-dns-c4b758ff5-gdxjp\" (UID: \"5f29d993-47df-4952-a137-bb5cf52ea59a\") " pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.295834 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f29d993-47df-4952-a137-bb5cf52ea59a-config\") pod \"dnsmasq-dns-c4b758ff5-gdxjp\" (UID: \"5f29d993-47df-4952-a137-bb5cf52ea59a\") " pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.296433 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f29d993-47df-4952-a137-bb5cf52ea59a-dns-svc\") pod \"dnsmasq-dns-c4b758ff5-gdxjp\" (UID: \"5f29d993-47df-4952-a137-bb5cf52ea59a\") " pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.297031 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f29d993-47df-4952-a137-bb5cf52ea59a-ovsdbserver-nb\") pod \"dnsmasq-dns-c4b758ff5-gdxjp\" 
(UID: \"5f29d993-47df-4952-a137-bb5cf52ea59a\") " pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.309017 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f29d993-47df-4952-a137-bb5cf52ea59a-dns-swift-storage-0\") pod \"dnsmasq-dns-c4b758ff5-gdxjp\" (UID: \"5f29d993-47df-4952-a137-bb5cf52ea59a\") " pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.312018 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5f29d993-47df-4952-a137-bb5cf52ea59a-openstack-edpm-ipam\") pod \"dnsmasq-dns-c4b758ff5-gdxjp\" (UID: \"5f29d993-47df-4952-a137-bb5cf52ea59a\") " pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.372390 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx5nv\" (UniqueName: \"kubernetes.io/projected/5f29d993-47df-4952-a137-bb5cf52ea59a-kube-api-access-nx5nv\") pod \"dnsmasq-dns-c4b758ff5-gdxjp\" (UID: \"5f29d993-47df-4952-a137-bb5cf52ea59a\") " pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.431717 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.668177 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.808978 4965 generic.go:334] "Generic (PLEG): container finished" podID="6d323130-9034-45b0-9f95-02b4494ff391" containerID="4939c29d495e6f2cd4b1da938e4d9028249a2ce393ea6aa52c613d0350b32b4e" exitCode=0 Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.809332 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-dbkhp" event={"ID":"6d323130-9034-45b0-9f95-02b4494ff391","Type":"ContainerDied","Data":"4939c29d495e6f2cd4b1da938e4d9028249a2ce393ea6aa52c613d0350b32b4e"} Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.809369 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-dbkhp" event={"ID":"6d323130-9034-45b0-9f95-02b4494ff391","Type":"ContainerDied","Data":"649c5c2d86383d5aa368caa7bf569c17540d93290773970d519f329a0774a7e3"} Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.809391 4965 scope.go:117] "RemoveContainer" containerID="4939c29d495e6f2cd4b1da938e4d9028249a2ce393ea6aa52c613d0350b32b4e" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.809432 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-dbkhp" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.811917 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-dns-svc\") pod \"6d323130-9034-45b0-9f95-02b4494ff391\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.812072 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqp2v\" (UniqueName: \"kubernetes.io/projected/6d323130-9034-45b0-9f95-02b4494ff391-kube-api-access-lqp2v\") pod \"6d323130-9034-45b0-9f95-02b4494ff391\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.812152 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-dns-swift-storage-0\") pod \"6d323130-9034-45b0-9f95-02b4494ff391\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.812177 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-ovsdbserver-sb\") pod \"6d323130-9034-45b0-9f95-02b4494ff391\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.812256 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-ovsdbserver-nb\") pod \"6d323130-9034-45b0-9f95-02b4494ff391\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.812422 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-config\") pod \"6d323130-9034-45b0-9f95-02b4494ff391\" (UID: \"6d323130-9034-45b0-9f95-02b4494ff391\") " Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.818813 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d323130-9034-45b0-9f95-02b4494ff391-kube-api-access-lqp2v" (OuterVolumeSpecName: "kube-api-access-lqp2v") pod "6d323130-9034-45b0-9f95-02b4494ff391" (UID: "6d323130-9034-45b0-9f95-02b4494ff391"). InnerVolumeSpecName "kube-api-access-lqp2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.868362 4965 scope.go:117] "RemoveContainer" containerID="507b642af4de22e2d6208254efb55f0bd41d0516f0b4d92d0798908d6f278f80" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.884812 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6d323130-9034-45b0-9f95-02b4494ff391" (UID: "6d323130-9034-45b0-9f95-02b4494ff391"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.885733 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6d323130-9034-45b0-9f95-02b4494ff391" (UID: "6d323130-9034-45b0-9f95-02b4494ff391"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.897171 4965 scope.go:117] "RemoveContainer" containerID="4939c29d495e6f2cd4b1da938e4d9028249a2ce393ea6aa52c613d0350b32b4e" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.901098 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-config" (OuterVolumeSpecName: "config") pod "6d323130-9034-45b0-9f95-02b4494ff391" (UID: "6d323130-9034-45b0-9f95-02b4494ff391"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:07:20 crc kubenswrapper[4965]: E0219 10:07:20.897683 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4939c29d495e6f2cd4b1da938e4d9028249a2ce393ea6aa52c613d0350b32b4e\": container with ID starting with 4939c29d495e6f2cd4b1da938e4d9028249a2ce393ea6aa52c613d0350b32b4e not found: ID does not exist" containerID="4939c29d495e6f2cd4b1da938e4d9028249a2ce393ea6aa52c613d0350b32b4e" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.901246 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4939c29d495e6f2cd4b1da938e4d9028249a2ce393ea6aa52c613d0350b32b4e"} err="failed to get container status \"4939c29d495e6f2cd4b1da938e4d9028249a2ce393ea6aa52c613d0350b32b4e\": rpc error: code = NotFound desc = could not find container \"4939c29d495e6f2cd4b1da938e4d9028249a2ce393ea6aa52c613d0350b32b4e\": container with ID starting with 4939c29d495e6f2cd4b1da938e4d9028249a2ce393ea6aa52c613d0350b32b4e not found: ID does not exist" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.901391 4965 scope.go:117] "RemoveContainer" containerID="507b642af4de22e2d6208254efb55f0bd41d0516f0b4d92d0798908d6f278f80" Feb 19 10:07:20 crc kubenswrapper[4965]: E0219 10:07:20.902794 4965 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"507b642af4de22e2d6208254efb55f0bd41d0516f0b4d92d0798908d6f278f80\": container with ID starting with 507b642af4de22e2d6208254efb55f0bd41d0516f0b4d92d0798908d6f278f80 not found: ID does not exist" containerID="507b642af4de22e2d6208254efb55f0bd41d0516f0b4d92d0798908d6f278f80" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.902841 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"507b642af4de22e2d6208254efb55f0bd41d0516f0b4d92d0798908d6f278f80"} err="failed to get container status \"507b642af4de22e2d6208254efb55f0bd41d0516f0b4d92d0798908d6f278f80\": rpc error: code = NotFound desc = could not find container \"507b642af4de22e2d6208254efb55f0bd41d0516f0b4d92d0798908d6f278f80\": container with ID starting with 507b642af4de22e2d6208254efb55f0bd41d0516f0b4d92d0798908d6f278f80 not found: ID does not exist" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.905318 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6d323130-9034-45b0-9f95-02b4494ff391" (UID: "6d323130-9034-45b0-9f95-02b4494ff391"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.909931 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6d323130-9034-45b0-9f95-02b4494ff391" (UID: "6d323130-9034-45b0-9f95-02b4494ff391"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.915725 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.915763 4965 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.915776 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqp2v\" (UniqueName: \"kubernetes.io/projected/6d323130-9034-45b0-9f95-02b4494ff391-kube-api-access-lqp2v\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.915792 4965 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.915806 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.915819 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d323130-9034-45b0-9f95-02b4494ff391-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:20 crc kubenswrapper[4965]: I0219 10:07:20.981028 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c4b758ff5-gdxjp"] Feb 19 10:07:20 crc kubenswrapper[4965]: W0219 10:07:20.984738 4965 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f29d993_47df_4952_a137_bb5cf52ea59a.slice/crio-f104628e15fd49e6ad7b2bcbccaf51655e3401d1918c79bcd1c13bcaee70799d WatchSource:0}: Error finding container f104628e15fd49e6ad7b2bcbccaf51655e3401d1918c79bcd1c13bcaee70799d: Status 404 returned error can't find the container with id f104628e15fd49e6ad7b2bcbccaf51655e3401d1918c79bcd1c13bcaee70799d Feb 19 10:07:21 crc kubenswrapper[4965]: I0219 10:07:21.193891 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-dbkhp"] Feb 19 10:07:21 crc kubenswrapper[4965]: I0219 10:07:21.214511 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-dbkhp"] Feb 19 10:07:21 crc kubenswrapper[4965]: I0219 10:07:21.825591 4965 generic.go:334] "Generic (PLEG): container finished" podID="5f29d993-47df-4952-a137-bb5cf52ea59a" containerID="16b02e5d234018756e1fcb596db59825f8d295adcf68813cab2ca3c02784275c" exitCode=0 Feb 19 10:07:21 crc kubenswrapper[4965]: I0219 10:07:21.825678 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" event={"ID":"5f29d993-47df-4952-a137-bb5cf52ea59a","Type":"ContainerDied","Data":"16b02e5d234018756e1fcb596db59825f8d295adcf68813cab2ca3c02784275c"} Feb 19 10:07:21 crc kubenswrapper[4965]: I0219 10:07:21.826016 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" event={"ID":"5f29d993-47df-4952-a137-bb5cf52ea59a","Type":"ContainerStarted","Data":"f104628e15fd49e6ad7b2bcbccaf51655e3401d1918c79bcd1c13bcaee70799d"} Feb 19 10:07:22 crc kubenswrapper[4965]: I0219 10:07:22.841035 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" event={"ID":"5f29d993-47df-4952-a137-bb5cf52ea59a","Type":"ContainerStarted","Data":"80e92038dcf70281190754e994b1f8aeb46bd71aec797a491b1bad2b18627e2f"} Feb 19 10:07:22 crc kubenswrapper[4965]: I0219 10:07:22.841285 4965 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:22 crc kubenswrapper[4965]: I0219 10:07:22.872469 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" podStartSLOduration=2.872445877 podStartE2EDuration="2.872445877s" podCreationTimestamp="2026-02-19 10:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:07:22.862143687 +0000 UTC m=+1498.483464997" watchObservedRunningTime="2026-02-19 10:07:22.872445877 +0000 UTC m=+1498.493767187" Feb 19 10:07:23 crc kubenswrapper[4965]: I0219 10:07:23.213299 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d323130-9034-45b0-9f95-02b4494ff391" path="/var/lib/kubelet/pods/6d323130-9034-45b0-9f95-02b4494ff391/volumes" Feb 19 10:07:23 crc kubenswrapper[4965]: I0219 10:07:23.742745 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 10:07:30 crc kubenswrapper[4965]: I0219 10:07:30.433779 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c4b758ff5-gdxjp" Feb 19 10:07:30 crc kubenswrapper[4965]: I0219 10:07:30.513702 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-q7r8b"] Feb 19 10:07:30 crc kubenswrapper[4965]: I0219 10:07:30.514308 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" podUID="93ff6407-c851-4dec-a6bd-383600a74940" containerName="dnsmasq-dns" containerID="cri-o://6e8cc8d0e82304e2b957961b828a43d0cab2bcb1fa15e9a7ce69f60d2ad47402" gracePeriod=10 Feb 19 10:07:30 crc kubenswrapper[4965]: I0219 10:07:30.940985 4965 generic.go:334] "Generic (PLEG): container finished" podID="93ff6407-c851-4dec-a6bd-383600a74940" 
containerID="6e8cc8d0e82304e2b957961b828a43d0cab2bcb1fa15e9a7ce69f60d2ad47402" exitCode=0 Feb 19 10:07:30 crc kubenswrapper[4965]: I0219 10:07:30.941057 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" event={"ID":"93ff6407-c851-4dec-a6bd-383600a74940","Type":"ContainerDied","Data":"6e8cc8d0e82304e2b957961b828a43d0cab2bcb1fa15e9a7ce69f60d2ad47402"} Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.083679 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.155778 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-dns-svc\") pod \"93ff6407-c851-4dec-a6bd-383600a74940\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.155920 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-openstack-edpm-ipam\") pod \"93ff6407-c851-4dec-a6bd-383600a74940\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.156035 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-config\") pod \"93ff6407-c851-4dec-a6bd-383600a74940\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.156188 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-ovsdbserver-nb\") pod \"93ff6407-c851-4dec-a6bd-383600a74940\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " Feb 19 
10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.156291 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc2c5\" (UniqueName: \"kubernetes.io/projected/93ff6407-c851-4dec-a6bd-383600a74940-kube-api-access-xc2c5\") pod \"93ff6407-c851-4dec-a6bd-383600a74940\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.157032 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-ovsdbserver-sb\") pod \"93ff6407-c851-4dec-a6bd-383600a74940\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.157065 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-dns-swift-storage-0\") pod \"93ff6407-c851-4dec-a6bd-383600a74940\" (UID: \"93ff6407-c851-4dec-a6bd-383600a74940\") " Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.186890 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93ff6407-c851-4dec-a6bd-383600a74940-kube-api-access-xc2c5" (OuterVolumeSpecName: "kube-api-access-xc2c5") pod "93ff6407-c851-4dec-a6bd-383600a74940" (UID: "93ff6407-c851-4dec-a6bd-383600a74940"). InnerVolumeSpecName "kube-api-access-xc2c5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.241297 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "93ff6407-c851-4dec-a6bd-383600a74940" (UID: "93ff6407-c851-4dec-a6bd-383600a74940"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.255560 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-config" (OuterVolumeSpecName: "config") pod "93ff6407-c851-4dec-a6bd-383600a74940" (UID: "93ff6407-c851-4dec-a6bd-383600a74940"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.257739 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "93ff6407-c851-4dec-a6bd-383600a74940" (UID: "93ff6407-c851-4dec-a6bd-383600a74940"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.260573 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.260601 4965 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.260610 4965 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.260620 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc2c5\" (UniqueName: \"kubernetes.io/projected/93ff6407-c851-4dec-a6bd-383600a74940-kube-api-access-xc2c5\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 
10:07:31.261514 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "93ff6407-c851-4dec-a6bd-383600a74940" (UID: "93ff6407-c851-4dec-a6bd-383600a74940"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.268715 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "93ff6407-c851-4dec-a6bd-383600a74940" (UID: "93ff6407-c851-4dec-a6bd-383600a74940"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.273488 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "93ff6407-c851-4dec-a6bd-383600a74940" (UID: "93ff6407-c851-4dec-a6bd-383600a74940"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.361999 4965 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.362035 4965 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.362045 4965 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/93ff6407-c851-4dec-a6bd-383600a74940-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.956108 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" event={"ID":"93ff6407-c851-4dec-a6bd-383600a74940","Type":"ContainerDied","Data":"6ad8d14ed216832bef07e8ec710cfca219ebe366a4afbedf2f3fb4cd68f4e584"} Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.956177 4965 scope.go:117] "RemoveContainer" containerID="6e8cc8d0e82304e2b957961b828a43d0cab2bcb1fa15e9a7ce69f60d2ad47402" Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.956230 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-q7r8b" Feb 19 10:07:31 crc kubenswrapper[4965]: I0219 10:07:31.988953 4965 scope.go:117] "RemoveContainer" containerID="b8bff6e9c41b9e31d85bb19c025728d128aaf495f58dcc45a68331e207ddf71b" Feb 19 10:07:32 crc kubenswrapper[4965]: I0219 10:07:32.000390 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-q7r8b"] Feb 19 10:07:32 crc kubenswrapper[4965]: I0219 10:07:32.010650 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-q7r8b"] Feb 19 10:07:33 crc kubenswrapper[4965]: I0219 10:07:33.215793 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93ff6407-c851-4dec-a6bd-383600a74940" path="/var/lib/kubelet/pods/93ff6407-c851-4dec-a6bd-383600a74940/volumes" Feb 19 10:07:38 crc kubenswrapper[4965]: I0219 10:07:38.931592 4965 scope.go:117] "RemoveContainer" containerID="7e24aa672508623ed5e44c0c18287abe244b42be8f6e1ea0a2aabc6d3d793a9e" Feb 19 10:07:40 crc kubenswrapper[4965]: I0219 10:07:40.054134 4965 generic.go:334] "Generic (PLEG): container finished" podID="e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa" containerID="be5ee0ae14d047024958c60742b7dfaaf670b250edfb859c820de895131b6b18" exitCode=0 Feb 19 10:07:40 crc kubenswrapper[4965]: I0219 10:07:40.054261 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa","Type":"ContainerDied","Data":"be5ee0ae14d047024958c60742b7dfaaf670b250edfb859c820de895131b6b18"} Feb 19 10:07:40 crc kubenswrapper[4965]: I0219 10:07:40.058153 4965 generic.go:334] "Generic (PLEG): container finished" podID="8214d39f-90ff-4188-abbf-6a097f33eef0" containerID="dfae216125c0b56f6e0a48ada368f3c6903410e73a5a3508866a6ece3dab834e" exitCode=0 Feb 19 10:07:40 crc kubenswrapper[4965]: I0219 10:07:40.058265 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"8214d39f-90ff-4188-abbf-6a097f33eef0","Type":"ContainerDied","Data":"dfae216125c0b56f6e0a48ada368f3c6903410e73a5a3508866a6ece3dab834e"} Feb 19 10:07:41 crc kubenswrapper[4965]: I0219 10:07:41.069504 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa","Type":"ContainerStarted","Data":"be9e58040941c228686bd6079d7a68da8f64cae5b28b3deb47ae041d2aa5c232"} Feb 19 10:07:41 crc kubenswrapper[4965]: I0219 10:07:41.069980 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 10:07:41 crc kubenswrapper[4965]: I0219 10:07:41.073057 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8214d39f-90ff-4188-abbf-6a097f33eef0","Type":"ContainerStarted","Data":"7ecdf76c07765b659219a117c9e27d30403135d72fe757b260d47a613c9fff03"} Feb 19 10:07:41 crc kubenswrapper[4965]: I0219 10:07:41.073352 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:41 crc kubenswrapper[4965]: I0219 10:07:41.092254 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.092236065 podStartE2EDuration="37.092236065s" podCreationTimestamp="2026-02-19 10:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:07:41.087396308 +0000 UTC m=+1516.708717638" watchObservedRunningTime="2026-02-19 10:07:41.092236065 +0000 UTC m=+1516.713557375" Feb 19 10:07:41 crc kubenswrapper[4965]: I0219 10:07:41.123094 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.123071424 podStartE2EDuration="37.123071424s" podCreationTimestamp="2026-02-19 10:07:04 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:07:41.118969174 +0000 UTC m=+1516.740290504" watchObservedRunningTime="2026-02-19 10:07:41.123071424 +0000 UTC m=+1516.744392734" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.666644 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v"] Feb 19 10:07:43 crc kubenswrapper[4965]: E0219 10:07:43.667244 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ff6407-c851-4dec-a6bd-383600a74940" containerName="init" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.667256 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ff6407-c851-4dec-a6bd-383600a74940" containerName="init" Feb 19 10:07:43 crc kubenswrapper[4965]: E0219 10:07:43.667287 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d323130-9034-45b0-9f95-02b4494ff391" containerName="dnsmasq-dns" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.667293 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d323130-9034-45b0-9f95-02b4494ff391" containerName="dnsmasq-dns" Feb 19 10:07:43 crc kubenswrapper[4965]: E0219 10:07:43.667310 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d323130-9034-45b0-9f95-02b4494ff391" containerName="init" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.667316 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d323130-9034-45b0-9f95-02b4494ff391" containerName="init" Feb 19 10:07:43 crc kubenswrapper[4965]: E0219 10:07:43.667324 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ff6407-c851-4dec-a6bd-383600a74940" containerName="dnsmasq-dns" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.667330 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ff6407-c851-4dec-a6bd-383600a74940" containerName="dnsmasq-dns" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 
10:07:43.667502 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="93ff6407-c851-4dec-a6bd-383600a74940" containerName="dnsmasq-dns" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.667531 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d323130-9034-45b0-9f95-02b4494ff391" containerName="dnsmasq-dns" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.668208 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.679051 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v"] Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.682706 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.682780 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.682823 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cthw6" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.682730 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.782474 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hlz5\" (UniqueName: \"kubernetes.io/projected/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-kube-api-access-8hlz5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v\" (UID: \"58f8c7f1-d425-4b21-ba27-1e47c69ddd93\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.782837 
4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v\" (UID: \"58f8c7f1-d425-4b21-ba27-1e47c69ddd93\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.783286 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v\" (UID: \"58f8c7f1-d425-4b21-ba27-1e47c69ddd93\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.783467 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v\" (UID: \"58f8c7f1-d425-4b21-ba27-1e47c69ddd93\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.884911 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v\" (UID: \"58f8c7f1-d425-4b21-ba27-1e47c69ddd93\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.885180 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v\" (UID: \"58f8c7f1-d425-4b21-ba27-1e47c69ddd93\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.885309 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v\" (UID: \"58f8c7f1-d425-4b21-ba27-1e47c69ddd93\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.885439 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hlz5\" (UniqueName: \"kubernetes.io/projected/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-kube-api-access-8hlz5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v\" (UID: \"58f8c7f1-d425-4b21-ba27-1e47c69ddd93\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.891237 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v\" (UID: \"58f8c7f1-d425-4b21-ba27-1e47c69ddd93\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.893029 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v\" (UID: \"58f8c7f1-d425-4b21-ba27-1e47c69ddd93\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.913921 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hlz5\" (UniqueName: \"kubernetes.io/projected/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-kube-api-access-8hlz5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v\" (UID: \"58f8c7f1-d425-4b21-ba27-1e47c69ddd93\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v" Feb 19 10:07:43 crc kubenswrapper[4965]: I0219 10:07:43.915103 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v\" (UID: \"58f8c7f1-d425-4b21-ba27-1e47c69ddd93\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v" Feb 19 10:07:44 crc kubenswrapper[4965]: I0219 10:07:44.045355 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v" Feb 19 10:07:44 crc kubenswrapper[4965]: I0219 10:07:44.646928 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v"] Feb 19 10:07:45 crc kubenswrapper[4965]: I0219 10:07:45.120300 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Feb 19 10:07:45 crc kubenswrapper[4965]: I0219 10:07:45.121953 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v" event={"ID":"58f8c7f1-d425-4b21-ba27-1e47c69ddd93","Type":"ContainerStarted","Data":"ffa37a839c7aeaa1e0df24c645c614759a966688ea9998cca853d1f185ee38b6"} Feb 19 10:07:46 crc kubenswrapper[4965]: I0219 10:07:46.600841 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:07:46 crc kubenswrapper[4965]: I0219 10:07:46.601134 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:07:53 crc kubenswrapper[4965]: I0219 10:07:53.264704 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-52g9t"] Feb 19 10:07:53 crc kubenswrapper[4965]: I0219 10:07:53.267442 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-52g9t" Feb 19 10:07:53 crc kubenswrapper[4965]: I0219 10:07:53.285032 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-52g9t"] Feb 19 10:07:53 crc kubenswrapper[4965]: I0219 10:07:53.447790 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n456n\" (UniqueName: \"kubernetes.io/projected/e0220a23-3e8c-4081-b031-cbadd8c99d90-kube-api-access-n456n\") pod \"redhat-marketplace-52g9t\" (UID: \"e0220a23-3e8c-4081-b031-cbadd8c99d90\") " pod="openshift-marketplace/redhat-marketplace-52g9t" Feb 19 10:07:53 crc kubenswrapper[4965]: I0219 10:07:53.447869 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0220a23-3e8c-4081-b031-cbadd8c99d90-utilities\") pod \"redhat-marketplace-52g9t\" (UID: \"e0220a23-3e8c-4081-b031-cbadd8c99d90\") " pod="openshift-marketplace/redhat-marketplace-52g9t" Feb 19 10:07:53 crc kubenswrapper[4965]: I0219 10:07:53.447941 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0220a23-3e8c-4081-b031-cbadd8c99d90-catalog-content\") pod \"redhat-marketplace-52g9t\" (UID: \"e0220a23-3e8c-4081-b031-cbadd8c99d90\") " pod="openshift-marketplace/redhat-marketplace-52g9t" Feb 19 10:07:53 crc kubenswrapper[4965]: I0219 10:07:53.550041 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n456n\" (UniqueName: \"kubernetes.io/projected/e0220a23-3e8c-4081-b031-cbadd8c99d90-kube-api-access-n456n\") pod \"redhat-marketplace-52g9t\" (UID: \"e0220a23-3e8c-4081-b031-cbadd8c99d90\") " pod="openshift-marketplace/redhat-marketplace-52g9t" Feb 19 10:07:53 crc kubenswrapper[4965]: I0219 10:07:53.550130 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0220a23-3e8c-4081-b031-cbadd8c99d90-utilities\") pod \"redhat-marketplace-52g9t\" (UID: \"e0220a23-3e8c-4081-b031-cbadd8c99d90\") " pod="openshift-marketplace/redhat-marketplace-52g9t" Feb 19 10:07:53 crc kubenswrapper[4965]: I0219 10:07:53.550211 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0220a23-3e8c-4081-b031-cbadd8c99d90-catalog-content\") pod \"redhat-marketplace-52g9t\" (UID: \"e0220a23-3e8c-4081-b031-cbadd8c99d90\") " pod="openshift-marketplace/redhat-marketplace-52g9t" Feb 19 10:07:53 crc kubenswrapper[4965]: I0219 10:07:53.550743 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0220a23-3e8c-4081-b031-cbadd8c99d90-utilities\") pod \"redhat-marketplace-52g9t\" (UID: \"e0220a23-3e8c-4081-b031-cbadd8c99d90\") " pod="openshift-marketplace/redhat-marketplace-52g9t" Feb 19 10:07:53 crc kubenswrapper[4965]: I0219 10:07:53.550901 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0220a23-3e8c-4081-b031-cbadd8c99d90-catalog-content\") pod \"redhat-marketplace-52g9t\" (UID: \"e0220a23-3e8c-4081-b031-cbadd8c99d90\") " pod="openshift-marketplace/redhat-marketplace-52g9t" Feb 19 10:07:53 crc kubenswrapper[4965]: I0219 10:07:53.578853 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n456n\" (UniqueName: \"kubernetes.io/projected/e0220a23-3e8c-4081-b031-cbadd8c99d90-kube-api-access-n456n\") pod \"redhat-marketplace-52g9t\" (UID: \"e0220a23-3e8c-4081-b031-cbadd8c99d90\") " pod="openshift-marketplace/redhat-marketplace-52g9t" Feb 19 10:07:53 crc kubenswrapper[4965]: I0219 10:07:53.604062 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-52g9t" Feb 19 10:07:55 crc kubenswrapper[4965]: I0219 10:07:55.070427 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 10:07:55 crc kubenswrapper[4965]: I0219 10:07:55.132370 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:07:57 crc kubenswrapper[4965]: I0219 10:07:57.128544 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-52g9t"] Feb 19 10:07:57 crc kubenswrapper[4965]: I0219 10:07:57.287513 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v" event={"ID":"58f8c7f1-d425-4b21-ba27-1e47c69ddd93","Type":"ContainerStarted","Data":"06a13bf571ec2a87144b81c320b885d099bc94a74f079f3726d056822e8582cd"} Feb 19 10:07:57 crc kubenswrapper[4965]: I0219 10:07:57.288723 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52g9t" event={"ID":"e0220a23-3e8c-4081-b031-cbadd8c99d90","Type":"ContainerStarted","Data":"d04ddbf5ca2ca0ad0ed659940f3a81ad54ec00f82c93b1f07aee11e7d34d672a"} Feb 19 10:07:57 crc kubenswrapper[4965]: I0219 10:07:57.310223 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v" podStartSLOduration=2.271925903 podStartE2EDuration="14.31018756s" podCreationTimestamp="2026-02-19 10:07:43 +0000 UTC" firstStartedPulling="2026-02-19 10:07:44.644260405 +0000 UTC m=+1520.265581725" lastFinishedPulling="2026-02-19 10:07:56.682522072 +0000 UTC m=+1532.303843382" observedRunningTime="2026-02-19 10:07:57.304277197 +0000 UTC m=+1532.925598507" watchObservedRunningTime="2026-02-19 10:07:57.31018756 +0000 UTC m=+1532.931508870" Feb 19 10:07:58 crc kubenswrapper[4965]: I0219 10:07:58.299925 4965 generic.go:334] 
"Generic (PLEG): container finished" podID="e0220a23-3e8c-4081-b031-cbadd8c99d90" containerID="9c7990407585c7ecff12658107c4bc2c2954d80626390662a66c62071763621c" exitCode=0 Feb 19 10:07:58 crc kubenswrapper[4965]: I0219 10:07:58.300180 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52g9t" event={"ID":"e0220a23-3e8c-4081-b031-cbadd8c99d90","Type":"ContainerDied","Data":"9c7990407585c7ecff12658107c4bc2c2954d80626390662a66c62071763621c"} Feb 19 10:07:59 crc kubenswrapper[4965]: I0219 10:07:59.310109 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52g9t" event={"ID":"e0220a23-3e8c-4081-b031-cbadd8c99d90","Type":"ContainerStarted","Data":"0e1137a9ae7fa05bedab186ccc1e00fcc8c7e02e9b1168c09c052750deaea509"} Feb 19 10:08:00 crc kubenswrapper[4965]: I0219 10:08:00.321005 4965 generic.go:334] "Generic (PLEG): container finished" podID="e0220a23-3e8c-4081-b031-cbadd8c99d90" containerID="0e1137a9ae7fa05bedab186ccc1e00fcc8c7e02e9b1168c09c052750deaea509" exitCode=0 Feb 19 10:08:00 crc kubenswrapper[4965]: I0219 10:08:00.321096 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52g9t" event={"ID":"e0220a23-3e8c-4081-b031-cbadd8c99d90","Type":"ContainerDied","Data":"0e1137a9ae7fa05bedab186ccc1e00fcc8c7e02e9b1168c09c052750deaea509"} Feb 19 10:08:01 crc kubenswrapper[4965]: I0219 10:08:01.332410 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52g9t" event={"ID":"e0220a23-3e8c-4081-b031-cbadd8c99d90","Type":"ContainerStarted","Data":"66d54a7e20f6f4fd481f7f51171a78988f01633daa4fa4374e1fe03046c0b8be"} Feb 19 10:08:01 crc kubenswrapper[4965]: I0219 10:08:01.353720 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-52g9t" podStartSLOduration=5.94820766 podStartE2EDuration="8.353696923s" 
podCreationTimestamp="2026-02-19 10:07:53 +0000 UTC" firstStartedPulling="2026-02-19 10:07:58.301859157 +0000 UTC m=+1533.923180467" lastFinishedPulling="2026-02-19 10:08:00.70734843 +0000 UTC m=+1536.328669730" observedRunningTime="2026-02-19 10:08:01.348027985 +0000 UTC m=+1536.969349295" watchObservedRunningTime="2026-02-19 10:08:01.353696923 +0000 UTC m=+1536.975018243" Feb 19 10:08:03 crc kubenswrapper[4965]: I0219 10:08:03.604537 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-52g9t" Feb 19 10:08:03 crc kubenswrapper[4965]: I0219 10:08:03.604879 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-52g9t" Feb 19 10:08:03 crc kubenswrapper[4965]: I0219 10:08:03.661678 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-52g9t" Feb 19 10:08:07 crc kubenswrapper[4965]: I0219 10:08:07.390996 4965 generic.go:334] "Generic (PLEG): container finished" podID="58f8c7f1-d425-4b21-ba27-1e47c69ddd93" containerID="06a13bf571ec2a87144b81c320b885d099bc94a74f079f3726d056822e8582cd" exitCode=0 Feb 19 10:08:07 crc kubenswrapper[4965]: I0219 10:08:07.391098 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v" event={"ID":"58f8c7f1-d425-4b21-ba27-1e47c69ddd93","Type":"ContainerDied","Data":"06a13bf571ec2a87144b81c320b885d099bc94a74f079f3726d056822e8582cd"} Feb 19 10:08:08 crc kubenswrapper[4965]: I0219 10:08:08.976229 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.072364 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-inventory\") pod \"58f8c7f1-d425-4b21-ba27-1e47c69ddd93\" (UID: \"58f8c7f1-d425-4b21-ba27-1e47c69ddd93\") " Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.072476 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-ssh-key-openstack-edpm-ipam\") pod \"58f8c7f1-d425-4b21-ba27-1e47c69ddd93\" (UID: \"58f8c7f1-d425-4b21-ba27-1e47c69ddd93\") " Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.072588 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hlz5\" (UniqueName: \"kubernetes.io/projected/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-kube-api-access-8hlz5\") pod \"58f8c7f1-d425-4b21-ba27-1e47c69ddd93\" (UID: \"58f8c7f1-d425-4b21-ba27-1e47c69ddd93\") " Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.072655 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-repo-setup-combined-ca-bundle\") pod \"58f8c7f1-d425-4b21-ba27-1e47c69ddd93\" (UID: \"58f8c7f1-d425-4b21-ba27-1e47c69ddd93\") " Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.080502 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "58f8c7f1-d425-4b21-ba27-1e47c69ddd93" (UID: "58f8c7f1-d425-4b21-ba27-1e47c69ddd93"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.084209 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-kube-api-access-8hlz5" (OuterVolumeSpecName: "kube-api-access-8hlz5") pod "58f8c7f1-d425-4b21-ba27-1e47c69ddd93" (UID: "58f8c7f1-d425-4b21-ba27-1e47c69ddd93"). InnerVolumeSpecName "kube-api-access-8hlz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.112738 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-inventory" (OuterVolumeSpecName: "inventory") pod "58f8c7f1-d425-4b21-ba27-1e47c69ddd93" (UID: "58f8c7f1-d425-4b21-ba27-1e47c69ddd93"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.114632 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "58f8c7f1-d425-4b21-ba27-1e47c69ddd93" (UID: "58f8c7f1-d425-4b21-ba27-1e47c69ddd93"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.178783 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hlz5\" (UniqueName: \"kubernetes.io/projected/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-kube-api-access-8hlz5\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.178832 4965 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.178849 4965 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.178863 4965 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58f8c7f1-d425-4b21-ba27-1e47c69ddd93-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.432210 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v" event={"ID":"58f8c7f1-d425-4b21-ba27-1e47c69ddd93","Type":"ContainerDied","Data":"ffa37a839c7aeaa1e0df24c645c614759a966688ea9998cca853d1f185ee38b6"} Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.432257 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffa37a839c7aeaa1e0df24c645c614759a966688ea9998cca853d1f185ee38b6" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.432270 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.511268 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-rdb84"] Feb 19 10:08:09 crc kubenswrapper[4965]: E0219 10:08:09.511786 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58f8c7f1-d425-4b21-ba27-1e47c69ddd93" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.511818 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="58f8c7f1-d425-4b21-ba27-1e47c69ddd93" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.512108 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="58f8c7f1-d425-4b21-ba27-1e47c69ddd93" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.513096 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rdb84" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.517028 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cthw6" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.517324 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.517428 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.517062 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.547687 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-rdb84"] Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.589884 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bds28\" (UniqueName: \"kubernetes.io/projected/da068017-3803-4d74-bea1-932b1d829055-kube-api-access-bds28\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rdb84\" (UID: \"da068017-3803-4d74-bea1-932b1d829055\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rdb84" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.590231 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da068017-3803-4d74-bea1-932b1d829055-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rdb84\" (UID: \"da068017-3803-4d74-bea1-932b1d829055\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rdb84" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.590438 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da068017-3803-4d74-bea1-932b1d829055-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rdb84\" (UID: \"da068017-3803-4d74-bea1-932b1d829055\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rdb84" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.692511 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bds28\" (UniqueName: \"kubernetes.io/projected/da068017-3803-4d74-bea1-932b1d829055-kube-api-access-bds28\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rdb84\" (UID: \"da068017-3803-4d74-bea1-932b1d829055\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rdb84" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.692689 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da068017-3803-4d74-bea1-932b1d829055-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rdb84\" (UID: \"da068017-3803-4d74-bea1-932b1d829055\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rdb84" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.692781 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da068017-3803-4d74-bea1-932b1d829055-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rdb84\" (UID: \"da068017-3803-4d74-bea1-932b1d829055\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rdb84" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.698117 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da068017-3803-4d74-bea1-932b1d829055-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-rdb84\" (UID: \"da068017-3803-4d74-bea1-932b1d829055\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rdb84" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.698721 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da068017-3803-4d74-bea1-932b1d829055-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rdb84\" (UID: \"da068017-3803-4d74-bea1-932b1d829055\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rdb84" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.718582 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bds28\" (UniqueName: \"kubernetes.io/projected/da068017-3803-4d74-bea1-932b1d829055-kube-api-access-bds28\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rdb84\" (UID: \"da068017-3803-4d74-bea1-932b1d829055\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rdb84" Feb 19 10:08:09 crc kubenswrapper[4965]: I0219 10:08:09.843371 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rdb84" Feb 19 10:08:10 crc kubenswrapper[4965]: W0219 10:08:10.370869 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda068017_3803_4d74_bea1_932b1d829055.slice/crio-4587c74310ea2c8c30ff62b6bf995fb34a3211551541c24cb6ed8522a98374ee WatchSource:0}: Error finding container 4587c74310ea2c8c30ff62b6bf995fb34a3211551541c24cb6ed8522a98374ee: Status 404 returned error can't find the container with id 4587c74310ea2c8c30ff62b6bf995fb34a3211551541c24cb6ed8522a98374ee Feb 19 10:08:10 crc kubenswrapper[4965]: I0219 10:08:10.371688 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-rdb84"] Feb 19 10:08:10 crc kubenswrapper[4965]: I0219 10:08:10.441995 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rdb84" event={"ID":"da068017-3803-4d74-bea1-932b1d829055","Type":"ContainerStarted","Data":"4587c74310ea2c8c30ff62b6bf995fb34a3211551541c24cb6ed8522a98374ee"} Feb 19 10:08:11 crc kubenswrapper[4965]: I0219 10:08:11.455353 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rdb84" event={"ID":"da068017-3803-4d74-bea1-932b1d829055","Type":"ContainerStarted","Data":"75d6e47bc0001d6a5d136565d6afc66f301b3a91f4005c5e854069e97d388dbc"} Feb 19 10:08:11 crc kubenswrapper[4965]: I0219 10:08:11.481228 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rdb84" podStartSLOduration=2.035187469 podStartE2EDuration="2.481207958s" podCreationTimestamp="2026-02-19 10:08:09 +0000 UTC" firstStartedPulling="2026-02-19 10:08:10.373795951 +0000 UTC m=+1545.995117261" lastFinishedPulling="2026-02-19 10:08:10.81981644 +0000 UTC m=+1546.441137750" observedRunningTime="2026-02-19 
10:08:11.474548197 +0000 UTC m=+1547.095869537" watchObservedRunningTime="2026-02-19 10:08:11.481207958 +0000 UTC m=+1547.102529268" Feb 19 10:08:13 crc kubenswrapper[4965]: I0219 10:08:13.475499 4965 generic.go:334] "Generic (PLEG): container finished" podID="da068017-3803-4d74-bea1-932b1d829055" containerID="75d6e47bc0001d6a5d136565d6afc66f301b3a91f4005c5e854069e97d388dbc" exitCode=0 Feb 19 10:08:13 crc kubenswrapper[4965]: I0219 10:08:13.475596 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rdb84" event={"ID":"da068017-3803-4d74-bea1-932b1d829055","Type":"ContainerDied","Data":"75d6e47bc0001d6a5d136565d6afc66f301b3a91f4005c5e854069e97d388dbc"} Feb 19 10:08:13 crc kubenswrapper[4965]: I0219 10:08:13.659590 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-52g9t" Feb 19 10:08:13 crc kubenswrapper[4965]: I0219 10:08:13.720262 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-52g9t"] Feb 19 10:08:14 crc kubenswrapper[4965]: I0219 10:08:14.485999 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-52g9t" podUID="e0220a23-3e8c-4081-b031-cbadd8c99d90" containerName="registry-server" containerID="cri-o://66d54a7e20f6f4fd481f7f51171a78988f01633daa4fa4374e1fe03046c0b8be" gracePeriod=2 Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.220647 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rdb84" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.240518 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-52g9t" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.307812 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n456n\" (UniqueName: \"kubernetes.io/projected/e0220a23-3e8c-4081-b031-cbadd8c99d90-kube-api-access-n456n\") pod \"e0220a23-3e8c-4081-b031-cbadd8c99d90\" (UID: \"e0220a23-3e8c-4081-b031-cbadd8c99d90\") " Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.307872 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bds28\" (UniqueName: \"kubernetes.io/projected/da068017-3803-4d74-bea1-932b1d829055-kube-api-access-bds28\") pod \"da068017-3803-4d74-bea1-932b1d829055\" (UID: \"da068017-3803-4d74-bea1-932b1d829055\") " Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.307938 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0220a23-3e8c-4081-b031-cbadd8c99d90-catalog-content\") pod \"e0220a23-3e8c-4081-b031-cbadd8c99d90\" (UID: \"e0220a23-3e8c-4081-b031-cbadd8c99d90\") " Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.307972 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da068017-3803-4d74-bea1-932b1d829055-ssh-key-openstack-edpm-ipam\") pod \"da068017-3803-4d74-bea1-932b1d829055\" (UID: \"da068017-3803-4d74-bea1-932b1d829055\") " Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.308106 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da068017-3803-4d74-bea1-932b1d829055-inventory\") pod \"da068017-3803-4d74-bea1-932b1d829055\" (UID: \"da068017-3803-4d74-bea1-932b1d829055\") " Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.308310 4965 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0220a23-3e8c-4081-b031-cbadd8c99d90-utilities\") pod \"e0220a23-3e8c-4081-b031-cbadd8c99d90\" (UID: \"e0220a23-3e8c-4081-b031-cbadd8c99d90\") " Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.313950 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0220a23-3e8c-4081-b031-cbadd8c99d90-utilities" (OuterVolumeSpecName: "utilities") pod "e0220a23-3e8c-4081-b031-cbadd8c99d90" (UID: "e0220a23-3e8c-4081-b031-cbadd8c99d90"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.316507 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0220a23-3e8c-4081-b031-cbadd8c99d90-kube-api-access-n456n" (OuterVolumeSpecName: "kube-api-access-n456n") pod "e0220a23-3e8c-4081-b031-cbadd8c99d90" (UID: "e0220a23-3e8c-4081-b031-cbadd8c99d90"). InnerVolumeSpecName "kube-api-access-n456n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.316586 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da068017-3803-4d74-bea1-932b1d829055-kube-api-access-bds28" (OuterVolumeSpecName: "kube-api-access-bds28") pod "da068017-3803-4d74-bea1-932b1d829055" (UID: "da068017-3803-4d74-bea1-932b1d829055"). InnerVolumeSpecName "kube-api-access-bds28". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.333884 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0220a23-3e8c-4081-b031-cbadd8c99d90-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0220a23-3e8c-4081-b031-cbadd8c99d90" (UID: "e0220a23-3e8c-4081-b031-cbadd8c99d90"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.337590 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da068017-3803-4d74-bea1-932b1d829055-inventory" (OuterVolumeSpecName: "inventory") pod "da068017-3803-4d74-bea1-932b1d829055" (UID: "da068017-3803-4d74-bea1-932b1d829055"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.345186 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da068017-3803-4d74-bea1-932b1d829055-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "da068017-3803-4d74-bea1-932b1d829055" (UID: "da068017-3803-4d74-bea1-932b1d829055"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.412484 4965 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da068017-3803-4d74-bea1-932b1d829055-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.412526 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0220a23-3e8c-4081-b031-cbadd8c99d90-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.412536 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n456n\" (UniqueName: \"kubernetes.io/projected/e0220a23-3e8c-4081-b031-cbadd8c99d90-kube-api-access-n456n\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.412550 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bds28\" (UniqueName: \"kubernetes.io/projected/da068017-3803-4d74-bea1-932b1d829055-kube-api-access-bds28\") on 
node \"crc\" DevicePath \"\"" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.412558 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0220a23-3e8c-4081-b031-cbadd8c99d90-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.412567 4965 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da068017-3803-4d74-bea1-932b1d829055-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.496711 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rdb84" event={"ID":"da068017-3803-4d74-bea1-932b1d829055","Type":"ContainerDied","Data":"4587c74310ea2c8c30ff62b6bf995fb34a3211551541c24cb6ed8522a98374ee"} Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.497036 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4587c74310ea2c8c30ff62b6bf995fb34a3211551541c24cb6ed8522a98374ee" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.496800 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rdb84" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.504019 4965 generic.go:334] "Generic (PLEG): container finished" podID="e0220a23-3e8c-4081-b031-cbadd8c99d90" containerID="66d54a7e20f6f4fd481f7f51171a78988f01633daa4fa4374e1fe03046c0b8be" exitCode=0 Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.504068 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52g9t" event={"ID":"e0220a23-3e8c-4081-b031-cbadd8c99d90","Type":"ContainerDied","Data":"66d54a7e20f6f4fd481f7f51171a78988f01633daa4fa4374e1fe03046c0b8be"} Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.504129 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52g9t" event={"ID":"e0220a23-3e8c-4081-b031-cbadd8c99d90","Type":"ContainerDied","Data":"d04ddbf5ca2ca0ad0ed659940f3a81ad54ec00f82c93b1f07aee11e7d34d672a"} Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.504158 4965 scope.go:117] "RemoveContainer" containerID="66d54a7e20f6f4fd481f7f51171a78988f01633daa4fa4374e1fe03046c0b8be" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.504096 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-52g9t" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.536260 4965 scope.go:117] "RemoveContainer" containerID="0e1137a9ae7fa05bedab186ccc1e00fcc8c7e02e9b1168c09c052750deaea509" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.571301 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-52g9t"] Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.573729 4965 scope.go:117] "RemoveContainer" containerID="9c7990407585c7ecff12658107c4bc2c2954d80626390662a66c62071763621c" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.614751 4965 scope.go:117] "RemoveContainer" containerID="66d54a7e20f6f4fd481f7f51171a78988f01633daa4fa4374e1fe03046c0b8be" Feb 19 10:08:15 crc kubenswrapper[4965]: E0219 10:08:15.625728 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66d54a7e20f6f4fd481f7f51171a78988f01633daa4fa4374e1fe03046c0b8be\": container with ID starting with 66d54a7e20f6f4fd481f7f51171a78988f01633daa4fa4374e1fe03046c0b8be not found: ID does not exist" containerID="66d54a7e20f6f4fd481f7f51171a78988f01633daa4fa4374e1fe03046c0b8be" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.625989 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66d54a7e20f6f4fd481f7f51171a78988f01633daa4fa4374e1fe03046c0b8be"} err="failed to get container status \"66d54a7e20f6f4fd481f7f51171a78988f01633daa4fa4374e1fe03046c0b8be\": rpc error: code = NotFound desc = could not find container \"66d54a7e20f6f4fd481f7f51171a78988f01633daa4fa4374e1fe03046c0b8be\": container with ID starting with 66d54a7e20f6f4fd481f7f51171a78988f01633daa4fa4374e1fe03046c0b8be not found: ID does not exist" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.626078 4965 scope.go:117] "RemoveContainer" 
containerID="0e1137a9ae7fa05bedab186ccc1e00fcc8c7e02e9b1168c09c052750deaea509" Feb 19 10:08:15 crc kubenswrapper[4965]: E0219 10:08:15.628906 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e1137a9ae7fa05bedab186ccc1e00fcc8c7e02e9b1168c09c052750deaea509\": container with ID starting with 0e1137a9ae7fa05bedab186ccc1e00fcc8c7e02e9b1168c09c052750deaea509 not found: ID does not exist" containerID="0e1137a9ae7fa05bedab186ccc1e00fcc8c7e02e9b1168c09c052750deaea509" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.628939 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e1137a9ae7fa05bedab186ccc1e00fcc8c7e02e9b1168c09c052750deaea509"} err="failed to get container status \"0e1137a9ae7fa05bedab186ccc1e00fcc8c7e02e9b1168c09c052750deaea509\": rpc error: code = NotFound desc = could not find container \"0e1137a9ae7fa05bedab186ccc1e00fcc8c7e02e9b1168c09c052750deaea509\": container with ID starting with 0e1137a9ae7fa05bedab186ccc1e00fcc8c7e02e9b1168c09c052750deaea509 not found: ID does not exist" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.628960 4965 scope.go:117] "RemoveContainer" containerID="9c7990407585c7ecff12658107c4bc2c2954d80626390662a66c62071763621c" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.630361 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-52g9t"] Feb 19 10:08:15 crc kubenswrapper[4965]: E0219 10:08:15.632987 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c7990407585c7ecff12658107c4bc2c2954d80626390662a66c62071763621c\": container with ID starting with 9c7990407585c7ecff12658107c4bc2c2954d80626390662a66c62071763621c not found: ID does not exist" containerID="9c7990407585c7ecff12658107c4bc2c2954d80626390662a66c62071763621c" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 
10:08:15.633099 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c7990407585c7ecff12658107c4bc2c2954d80626390662a66c62071763621c"} err="failed to get container status \"9c7990407585c7ecff12658107c4bc2c2954d80626390662a66c62071763621c\": rpc error: code = NotFound desc = could not find container \"9c7990407585c7ecff12658107c4bc2c2954d80626390662a66c62071763621c\": container with ID starting with 9c7990407585c7ecff12658107c4bc2c2954d80626390662a66c62071763621c not found: ID does not exist" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.641648 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9"] Feb 19 10:08:15 crc kubenswrapper[4965]: E0219 10:08:15.642218 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0220a23-3e8c-4081-b031-cbadd8c99d90" containerName="registry-server" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.642244 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0220a23-3e8c-4081-b031-cbadd8c99d90" containerName="registry-server" Feb 19 10:08:15 crc kubenswrapper[4965]: E0219 10:08:15.642281 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0220a23-3e8c-4081-b031-cbadd8c99d90" containerName="extract-utilities" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.642291 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0220a23-3e8c-4081-b031-cbadd8c99d90" containerName="extract-utilities" Feb 19 10:08:15 crc kubenswrapper[4965]: E0219 10:08:15.642314 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0220a23-3e8c-4081-b031-cbadd8c99d90" containerName="extract-content" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.642322 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0220a23-3e8c-4081-b031-cbadd8c99d90" containerName="extract-content" Feb 19 10:08:15 crc kubenswrapper[4965]: E0219 10:08:15.642335 4965 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da068017-3803-4d74-bea1-932b1d829055" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.642343 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="da068017-3803-4d74-bea1-932b1d829055" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.642568 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="da068017-3803-4d74-bea1-932b1d829055" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.642583 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0220a23-3e8c-4081-b031-cbadd8c99d90" containerName="registry-server" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.643459 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.646132 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.646377 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.647500 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cthw6" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.647920 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.650958 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9"] Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.728363 4965 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6a006f0-d704-4e08-bc46-118269ad9b1a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9\" (UID: \"a6a006f0-d704-4e08-bc46-118269ad9b1a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.728531 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfr9n\" (UniqueName: \"kubernetes.io/projected/a6a006f0-d704-4e08-bc46-118269ad9b1a-kube-api-access-qfr9n\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9\" (UID: \"a6a006f0-d704-4e08-bc46-118269ad9b1a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.728769 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6a006f0-d704-4e08-bc46-118269ad9b1a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9\" (UID: \"a6a006f0-d704-4e08-bc46-118269ad9b1a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.728822 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6a006f0-d704-4e08-bc46-118269ad9b1a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9\" (UID: \"a6a006f0-d704-4e08-bc46-118269ad9b1a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.831068 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/a6a006f0-d704-4e08-bc46-118269ad9b1a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9\" (UID: \"a6a006f0-d704-4e08-bc46-118269ad9b1a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.831232 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfr9n\" (UniqueName: \"kubernetes.io/projected/a6a006f0-d704-4e08-bc46-118269ad9b1a-kube-api-access-qfr9n\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9\" (UID: \"a6a006f0-d704-4e08-bc46-118269ad9b1a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.831314 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6a006f0-d704-4e08-bc46-118269ad9b1a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9\" (UID: \"a6a006f0-d704-4e08-bc46-118269ad9b1a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.831346 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6a006f0-d704-4e08-bc46-118269ad9b1a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9\" (UID: \"a6a006f0-d704-4e08-bc46-118269ad9b1a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.835777 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6a006f0-d704-4e08-bc46-118269ad9b1a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9\" (UID: \"a6a006f0-d704-4e08-bc46-118269ad9b1a\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.838651 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6a006f0-d704-4e08-bc46-118269ad9b1a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9\" (UID: \"a6a006f0-d704-4e08-bc46-118269ad9b1a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.838813 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6a006f0-d704-4e08-bc46-118269ad9b1a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9\" (UID: \"a6a006f0-d704-4e08-bc46-118269ad9b1a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.848183 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfr9n\" (UniqueName: \"kubernetes.io/projected/a6a006f0-d704-4e08-bc46-118269ad9b1a-kube-api-access-qfr9n\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9\" (UID: \"a6a006f0-d704-4e08-bc46-118269ad9b1a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9" Feb 19 10:08:15 crc kubenswrapper[4965]: I0219 10:08:15.990493 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9" Feb 19 10:08:16 crc kubenswrapper[4965]: I0219 10:08:16.555766 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9"] Feb 19 10:08:16 crc kubenswrapper[4965]: W0219 10:08:16.560649 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6a006f0_d704_4e08_bc46_118269ad9b1a.slice/crio-cd8818b8c5d9a0bf7dcd2a9de3cdf80aaeea91491fac3bb8a38c3dcd9b57ca41 WatchSource:0}: Error finding container cd8818b8c5d9a0bf7dcd2a9de3cdf80aaeea91491fac3bb8a38c3dcd9b57ca41: Status 404 returned error can't find the container with id cd8818b8c5d9a0bf7dcd2a9de3cdf80aaeea91491fac3bb8a38c3dcd9b57ca41 Feb 19 10:08:16 crc kubenswrapper[4965]: I0219 10:08:16.600998 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:08:16 crc kubenswrapper[4965]: I0219 10:08:16.601066 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:08:16 crc kubenswrapper[4965]: I0219 10:08:16.601107 4965 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" Feb 19 10:08:16 crc kubenswrapper[4965]: I0219 10:08:16.602039 4965 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c"} pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:08:16 crc kubenswrapper[4965]: I0219 10:08:16.602093 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" containerID="cri-o://76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" gracePeriod=600 Feb 19 10:08:16 crc kubenswrapper[4965]: E0219 10:08:16.728487 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:08:17 crc kubenswrapper[4965]: I0219 10:08:17.210692 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0220a23-3e8c-4081-b031-cbadd8c99d90" path="/var/lib/kubelet/pods/e0220a23-3e8c-4081-b031-cbadd8c99d90/volumes" Feb 19 10:08:17 crc kubenswrapper[4965]: I0219 10:08:17.528912 4965 generic.go:334] "Generic (PLEG): container finished" podID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" exitCode=0 Feb 19 10:08:17 crc kubenswrapper[4965]: I0219 10:08:17.528975 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerDied","Data":"76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c"} Feb 
19 10:08:17 crc kubenswrapper[4965]: I0219 10:08:17.529008 4965 scope.go:117] "RemoveContainer" containerID="6b5800cd8d3cdf0bd49b0429f539e236aa824e01e6e8bf55c3f2737a438df531" Feb 19 10:08:17 crc kubenswrapper[4965]: I0219 10:08:17.530915 4965 scope.go:117] "RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:08:17 crc kubenswrapper[4965]: E0219 10:08:17.531657 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:08:17 crc kubenswrapper[4965]: I0219 10:08:17.549015 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9" event={"ID":"a6a006f0-d704-4e08-bc46-118269ad9b1a","Type":"ContainerStarted","Data":"ea86e3e26447948246288c2d901c011373b2ceae33101e6a28ec541801f7bad9"} Feb 19 10:08:17 crc kubenswrapper[4965]: I0219 10:08:17.549061 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9" event={"ID":"a6a006f0-d704-4e08-bc46-118269ad9b1a","Type":"ContainerStarted","Data":"cd8818b8c5d9a0bf7dcd2a9de3cdf80aaeea91491fac3bb8a38c3dcd9b57ca41"} Feb 19 10:08:17 crc kubenswrapper[4965]: I0219 10:08:17.581418 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9" podStartSLOduration=2.102799254 podStartE2EDuration="2.581399464s" podCreationTimestamp="2026-02-19 10:08:15 +0000 UTC" firstStartedPulling="2026-02-19 10:08:16.563505711 +0000 UTC m=+1552.184827061" lastFinishedPulling="2026-02-19 10:08:17.042105961 +0000 UTC 
m=+1552.663427271" observedRunningTime="2026-02-19 10:08:17.576962237 +0000 UTC m=+1553.198283557" watchObservedRunningTime="2026-02-19 10:08:17.581399464 +0000 UTC m=+1553.202720774" Feb 19 10:08:32 crc kubenswrapper[4965]: I0219 10:08:32.198318 4965 scope.go:117] "RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:08:32 crc kubenswrapper[4965]: E0219 10:08:32.199541 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:08:39 crc kubenswrapper[4965]: I0219 10:08:39.134817 4965 scope.go:117] "RemoveContainer" containerID="608a21ed7bc143cd03932578a5956c3b61eb39b55c671064ae53cc1bffed4b79" Feb 19 10:08:39 crc kubenswrapper[4965]: I0219 10:08:39.180398 4965 scope.go:117] "RemoveContainer" containerID="feeae2cc21bdacee5d2b3b592d0dc5c61c7070aee632a7ecbf902939ce3970f4" Feb 19 10:08:39 crc kubenswrapper[4965]: I0219 10:08:39.251139 4965 scope.go:117] "RemoveContainer" containerID="2fc31f0ae4a307f5ef9cd950c3fd4308423d9cdf3157dfd12804e1e247595f7d" Feb 19 10:08:39 crc kubenswrapper[4965]: I0219 10:08:39.293602 4965 scope.go:117] "RemoveContainer" containerID="4510ff5521596128b6842f4d75fff21620f3b5eab589a07f182813e8f8a49a65" Feb 19 10:08:39 crc kubenswrapper[4965]: I0219 10:08:39.346262 4965 scope.go:117] "RemoveContainer" containerID="69d0ce7d229f21da5b7a38cdd07621d4accb6a0beddfe52b80b2c8d758e82c75" Feb 19 10:08:47 crc kubenswrapper[4965]: I0219 10:08:47.198038 4965 scope.go:117] "RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:08:47 crc kubenswrapper[4965]: E0219 10:08:47.199548 4965 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:09:02 crc kubenswrapper[4965]: I0219 10:09:02.201330 4965 scope.go:117] "RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:09:02 crc kubenswrapper[4965]: E0219 10:09:02.202023 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:09:14 crc kubenswrapper[4965]: I0219 10:09:14.198621 4965 scope.go:117] "RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:09:14 crc kubenswrapper[4965]: E0219 10:09:14.199434 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:09:27 crc kubenswrapper[4965]: I0219 10:09:27.198567 4965 scope.go:117] "RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:09:27 crc kubenswrapper[4965]: E0219 
10:09:27.199301 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:09:34 crc kubenswrapper[4965]: I0219 10:09:34.885802 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pqnnp"] Feb 19 10:09:34 crc kubenswrapper[4965]: I0219 10:09:34.890019 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pqnnp" Feb 19 10:09:34 crc kubenswrapper[4965]: I0219 10:09:34.913618 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pqnnp"] Feb 19 10:09:35 crc kubenswrapper[4965]: I0219 10:09:35.000498 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks4n9\" (UniqueName: \"kubernetes.io/projected/355479f0-bc77-47b7-9fa5-472e2cead404-kube-api-access-ks4n9\") pod \"certified-operators-pqnnp\" (UID: \"355479f0-bc77-47b7-9fa5-472e2cead404\") " pod="openshift-marketplace/certified-operators-pqnnp" Feb 19 10:09:35 crc kubenswrapper[4965]: I0219 10:09:35.000599 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/355479f0-bc77-47b7-9fa5-472e2cead404-utilities\") pod \"certified-operators-pqnnp\" (UID: \"355479f0-bc77-47b7-9fa5-472e2cead404\") " pod="openshift-marketplace/certified-operators-pqnnp" Feb 19 10:09:35 crc kubenswrapper[4965]: I0219 10:09:35.000680 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/355479f0-bc77-47b7-9fa5-472e2cead404-catalog-content\") pod \"certified-operators-pqnnp\" (UID: \"355479f0-bc77-47b7-9fa5-472e2cead404\") " pod="openshift-marketplace/certified-operators-pqnnp" Feb 19 10:09:35 crc kubenswrapper[4965]: I0219 10:09:35.102502 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/355479f0-bc77-47b7-9fa5-472e2cead404-utilities\") pod \"certified-operators-pqnnp\" (UID: \"355479f0-bc77-47b7-9fa5-472e2cead404\") " pod="openshift-marketplace/certified-operators-pqnnp" Feb 19 10:09:35 crc kubenswrapper[4965]: I0219 10:09:35.102624 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/355479f0-bc77-47b7-9fa5-472e2cead404-catalog-content\") pod \"certified-operators-pqnnp\" (UID: \"355479f0-bc77-47b7-9fa5-472e2cead404\") " pod="openshift-marketplace/certified-operators-pqnnp" Feb 19 10:09:35 crc kubenswrapper[4965]: I0219 10:09:35.102748 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks4n9\" (UniqueName: \"kubernetes.io/projected/355479f0-bc77-47b7-9fa5-472e2cead404-kube-api-access-ks4n9\") pod \"certified-operators-pqnnp\" (UID: \"355479f0-bc77-47b7-9fa5-472e2cead404\") " pod="openshift-marketplace/certified-operators-pqnnp" Feb 19 10:09:35 crc kubenswrapper[4965]: I0219 10:09:35.103151 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/355479f0-bc77-47b7-9fa5-472e2cead404-utilities\") pod \"certified-operators-pqnnp\" (UID: \"355479f0-bc77-47b7-9fa5-472e2cead404\") " pod="openshift-marketplace/certified-operators-pqnnp" Feb 19 10:09:35 crc kubenswrapper[4965]: I0219 10:09:35.103225 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/355479f0-bc77-47b7-9fa5-472e2cead404-catalog-content\") pod \"certified-operators-pqnnp\" (UID: \"355479f0-bc77-47b7-9fa5-472e2cead404\") " pod="openshift-marketplace/certified-operators-pqnnp" Feb 19 10:09:35 crc kubenswrapper[4965]: I0219 10:09:35.126953 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks4n9\" (UniqueName: \"kubernetes.io/projected/355479f0-bc77-47b7-9fa5-472e2cead404-kube-api-access-ks4n9\") pod \"certified-operators-pqnnp\" (UID: \"355479f0-bc77-47b7-9fa5-472e2cead404\") " pod="openshift-marketplace/certified-operators-pqnnp" Feb 19 10:09:35 crc kubenswrapper[4965]: I0219 10:09:35.250848 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pqnnp" Feb 19 10:09:35 crc kubenswrapper[4965]: I0219 10:09:35.845032 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pqnnp"] Feb 19 10:09:36 crc kubenswrapper[4965]: I0219 10:09:36.451714 4965 generic.go:334] "Generic (PLEG): container finished" podID="355479f0-bc77-47b7-9fa5-472e2cead404" containerID="d6939a3f2d0e2005ba08758c41fe3c8bdd07c85191d6a74ae67fbf72cfb6ef1e" exitCode=0 Feb 19 10:09:36 crc kubenswrapper[4965]: I0219 10:09:36.451829 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pqnnp" event={"ID":"355479f0-bc77-47b7-9fa5-472e2cead404","Type":"ContainerDied","Data":"d6939a3f2d0e2005ba08758c41fe3c8bdd07c85191d6a74ae67fbf72cfb6ef1e"} Feb 19 10:09:36 crc kubenswrapper[4965]: I0219 10:09:36.453452 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pqnnp" event={"ID":"355479f0-bc77-47b7-9fa5-472e2cead404","Type":"ContainerStarted","Data":"73a96b1dc38bd2d03893b54d4a1628f4b43426b628b57a43b0257b71c5f3db09"} Feb 19 10:09:36 crc kubenswrapper[4965]: I0219 10:09:36.454594 4965 provider.go:102] Refreshing cache for 
provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:09:37 crc kubenswrapper[4965]: I0219 10:09:37.464543 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pqnnp" event={"ID":"355479f0-bc77-47b7-9fa5-472e2cead404","Type":"ContainerStarted","Data":"af8876910270cd8ccff4c5b91b0a42655233473ff28b65bfbbca46973eda954b"} Feb 19 10:09:39 crc kubenswrapper[4965]: I0219 10:09:39.487162 4965 generic.go:334] "Generic (PLEG): container finished" podID="355479f0-bc77-47b7-9fa5-472e2cead404" containerID="af8876910270cd8ccff4c5b91b0a42655233473ff28b65bfbbca46973eda954b" exitCode=0 Feb 19 10:09:39 crc kubenswrapper[4965]: I0219 10:09:39.487683 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pqnnp" event={"ID":"355479f0-bc77-47b7-9fa5-472e2cead404","Type":"ContainerDied","Data":"af8876910270cd8ccff4c5b91b0a42655233473ff28b65bfbbca46973eda954b"} Feb 19 10:09:39 crc kubenswrapper[4965]: I0219 10:09:39.528021 4965 scope.go:117] "RemoveContainer" containerID="d639b0e62244bd72cf3e36a38011fe222908c4f35ba3b0bc5b9e57e2d49084ed" Feb 19 10:09:39 crc kubenswrapper[4965]: I0219 10:09:39.577936 4965 scope.go:117] "RemoveContainer" containerID="0b8cd47096442e6c01e98dc21447886c4820940445b68906e0b057f855e32074" Feb 19 10:09:40 crc kubenswrapper[4965]: I0219 10:09:40.198318 4965 scope.go:117] "RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:09:40 crc kubenswrapper[4965]: E0219 10:09:40.198850 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 
10:09:40 crc kubenswrapper[4965]: I0219 10:09:40.500186 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pqnnp" event={"ID":"355479f0-bc77-47b7-9fa5-472e2cead404","Type":"ContainerStarted","Data":"ab9187c9dde41466d39d5d51db8bbaa94c89824d430a078d17118f38ef2da374"} Feb 19 10:09:40 crc kubenswrapper[4965]: I0219 10:09:40.522623 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pqnnp" podStartSLOduration=3.101670083 podStartE2EDuration="6.522602101s" podCreationTimestamp="2026-02-19 10:09:34 +0000 UTC" firstStartedPulling="2026-02-19 10:09:36.454351367 +0000 UTC m=+1632.075672677" lastFinishedPulling="2026-02-19 10:09:39.875283385 +0000 UTC m=+1635.496604695" observedRunningTime="2026-02-19 10:09:40.520000107 +0000 UTC m=+1636.141321427" watchObservedRunningTime="2026-02-19 10:09:40.522602101 +0000 UTC m=+1636.143923431" Feb 19 10:09:45 crc kubenswrapper[4965]: I0219 10:09:45.252235 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pqnnp" Feb 19 10:09:45 crc kubenswrapper[4965]: I0219 10:09:45.252801 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pqnnp" Feb 19 10:09:45 crc kubenswrapper[4965]: I0219 10:09:45.318489 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pqnnp" Feb 19 10:09:45 crc kubenswrapper[4965]: I0219 10:09:45.614869 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pqnnp" Feb 19 10:09:45 crc kubenswrapper[4965]: I0219 10:09:45.662658 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pqnnp"] Feb 19 10:09:47 crc kubenswrapper[4965]: I0219 10:09:47.578305 4965 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/certified-operators-pqnnp" podUID="355479f0-bc77-47b7-9fa5-472e2cead404" containerName="registry-server" containerID="cri-o://ab9187c9dde41466d39d5d51db8bbaa94c89824d430a078d17118f38ef2da374" gracePeriod=2 Feb 19 10:09:47 crc kubenswrapper[4965]: E0219 10:09:47.850562 4965 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod355479f0_bc77_47b7_9fa5_472e2cead404.slice/crio-ab9187c9dde41466d39d5d51db8bbaa94c89824d430a078d17118f38ef2da374.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod355479f0_bc77_47b7_9fa5_472e2cead404.slice/crio-conmon-ab9187c9dde41466d39d5d51db8bbaa94c89824d430a078d17118f38ef2da374.scope\": RecentStats: unable to find data in memory cache]" Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.080512 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pqnnp" Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.187287 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks4n9\" (UniqueName: \"kubernetes.io/projected/355479f0-bc77-47b7-9fa5-472e2cead404-kube-api-access-ks4n9\") pod \"355479f0-bc77-47b7-9fa5-472e2cead404\" (UID: \"355479f0-bc77-47b7-9fa5-472e2cead404\") " Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.187419 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/355479f0-bc77-47b7-9fa5-472e2cead404-utilities\") pod \"355479f0-bc77-47b7-9fa5-472e2cead404\" (UID: \"355479f0-bc77-47b7-9fa5-472e2cead404\") " Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.187597 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/355479f0-bc77-47b7-9fa5-472e2cead404-catalog-content\") pod \"355479f0-bc77-47b7-9fa5-472e2cead404\" (UID: \"355479f0-bc77-47b7-9fa5-472e2cead404\") " Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.188277 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/355479f0-bc77-47b7-9fa5-472e2cead404-utilities" (OuterVolumeSpecName: "utilities") pod "355479f0-bc77-47b7-9fa5-472e2cead404" (UID: "355479f0-bc77-47b7-9fa5-472e2cead404"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.205544 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/355479f0-bc77-47b7-9fa5-472e2cead404-kube-api-access-ks4n9" (OuterVolumeSpecName: "kube-api-access-ks4n9") pod "355479f0-bc77-47b7-9fa5-472e2cead404" (UID: "355479f0-bc77-47b7-9fa5-472e2cead404"). InnerVolumeSpecName "kube-api-access-ks4n9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.244909 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/355479f0-bc77-47b7-9fa5-472e2cead404-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "355479f0-bc77-47b7-9fa5-472e2cead404" (UID: "355479f0-bc77-47b7-9fa5-472e2cead404"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.289725 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks4n9\" (UniqueName: \"kubernetes.io/projected/355479f0-bc77-47b7-9fa5-472e2cead404-kube-api-access-ks4n9\") on node \"crc\" DevicePath \"\"" Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.289765 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/355479f0-bc77-47b7-9fa5-472e2cead404-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.289775 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/355479f0-bc77-47b7-9fa5-472e2cead404-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.599276 4965 generic.go:334] "Generic (PLEG): container finished" podID="355479f0-bc77-47b7-9fa5-472e2cead404" containerID="ab9187c9dde41466d39d5d51db8bbaa94c89824d430a078d17118f38ef2da374" exitCode=0 Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.599351 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pqnnp" Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.599349 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pqnnp" event={"ID":"355479f0-bc77-47b7-9fa5-472e2cead404","Type":"ContainerDied","Data":"ab9187c9dde41466d39d5d51db8bbaa94c89824d430a078d17118f38ef2da374"} Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.599477 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pqnnp" event={"ID":"355479f0-bc77-47b7-9fa5-472e2cead404","Type":"ContainerDied","Data":"73a96b1dc38bd2d03893b54d4a1628f4b43426b628b57a43b0257b71c5f3db09"} Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.599514 4965 scope.go:117] "RemoveContainer" containerID="ab9187c9dde41466d39d5d51db8bbaa94c89824d430a078d17118f38ef2da374" Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.637576 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pqnnp"] Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.638180 4965 scope.go:117] "RemoveContainer" containerID="af8876910270cd8ccff4c5b91b0a42655233473ff28b65bfbbca46973eda954b" Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.649066 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pqnnp"] Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.670873 4965 scope.go:117] "RemoveContainer" containerID="d6939a3f2d0e2005ba08758c41fe3c8bdd07c85191d6a74ae67fbf72cfb6ef1e" Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.715573 4965 scope.go:117] "RemoveContainer" containerID="ab9187c9dde41466d39d5d51db8bbaa94c89824d430a078d17118f38ef2da374" Feb 19 10:09:48 crc kubenswrapper[4965]: E0219 10:09:48.716144 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ab9187c9dde41466d39d5d51db8bbaa94c89824d430a078d17118f38ef2da374\": container with ID starting with ab9187c9dde41466d39d5d51db8bbaa94c89824d430a078d17118f38ef2da374 not found: ID does not exist" containerID="ab9187c9dde41466d39d5d51db8bbaa94c89824d430a078d17118f38ef2da374" Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.716186 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab9187c9dde41466d39d5d51db8bbaa94c89824d430a078d17118f38ef2da374"} err="failed to get container status \"ab9187c9dde41466d39d5d51db8bbaa94c89824d430a078d17118f38ef2da374\": rpc error: code = NotFound desc = could not find container \"ab9187c9dde41466d39d5d51db8bbaa94c89824d430a078d17118f38ef2da374\": container with ID starting with ab9187c9dde41466d39d5d51db8bbaa94c89824d430a078d17118f38ef2da374 not found: ID does not exist" Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.716270 4965 scope.go:117] "RemoveContainer" containerID="af8876910270cd8ccff4c5b91b0a42655233473ff28b65bfbbca46973eda954b" Feb 19 10:09:48 crc kubenswrapper[4965]: E0219 10:09:48.716568 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af8876910270cd8ccff4c5b91b0a42655233473ff28b65bfbbca46973eda954b\": container with ID starting with af8876910270cd8ccff4c5b91b0a42655233473ff28b65bfbbca46973eda954b not found: ID does not exist" containerID="af8876910270cd8ccff4c5b91b0a42655233473ff28b65bfbbca46973eda954b" Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.716608 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af8876910270cd8ccff4c5b91b0a42655233473ff28b65bfbbca46973eda954b"} err="failed to get container status \"af8876910270cd8ccff4c5b91b0a42655233473ff28b65bfbbca46973eda954b\": rpc error: code = NotFound desc = could not find container \"af8876910270cd8ccff4c5b91b0a42655233473ff28b65bfbbca46973eda954b\": container with ID 
starting with af8876910270cd8ccff4c5b91b0a42655233473ff28b65bfbbca46973eda954b not found: ID does not exist" Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.716635 4965 scope.go:117] "RemoveContainer" containerID="d6939a3f2d0e2005ba08758c41fe3c8bdd07c85191d6a74ae67fbf72cfb6ef1e" Feb 19 10:09:48 crc kubenswrapper[4965]: E0219 10:09:48.717128 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6939a3f2d0e2005ba08758c41fe3c8bdd07c85191d6a74ae67fbf72cfb6ef1e\": container with ID starting with d6939a3f2d0e2005ba08758c41fe3c8bdd07c85191d6a74ae67fbf72cfb6ef1e not found: ID does not exist" containerID="d6939a3f2d0e2005ba08758c41fe3c8bdd07c85191d6a74ae67fbf72cfb6ef1e" Feb 19 10:09:48 crc kubenswrapper[4965]: I0219 10:09:48.717156 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6939a3f2d0e2005ba08758c41fe3c8bdd07c85191d6a74ae67fbf72cfb6ef1e"} err="failed to get container status \"d6939a3f2d0e2005ba08758c41fe3c8bdd07c85191d6a74ae67fbf72cfb6ef1e\": rpc error: code = NotFound desc = could not find container \"d6939a3f2d0e2005ba08758c41fe3c8bdd07c85191d6a74ae67fbf72cfb6ef1e\": container with ID starting with d6939a3f2d0e2005ba08758c41fe3c8bdd07c85191d6a74ae67fbf72cfb6ef1e not found: ID does not exist" Feb 19 10:09:49 crc kubenswrapper[4965]: I0219 10:09:49.228983 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="355479f0-bc77-47b7-9fa5-472e2cead404" path="/var/lib/kubelet/pods/355479f0-bc77-47b7-9fa5-472e2cead404/volumes" Feb 19 10:09:52 crc kubenswrapper[4965]: I0219 10:09:52.198817 4965 scope.go:117] "RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:09:52 crc kubenswrapper[4965]: E0219 10:09:52.199375 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:10:06 crc kubenswrapper[4965]: I0219 10:10:06.199616 4965 scope.go:117] "RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:10:06 crc kubenswrapper[4965]: E0219 10:10:06.200687 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:10:18 crc kubenswrapper[4965]: I0219 10:10:18.198424 4965 scope.go:117] "RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:10:18 crc kubenswrapper[4965]: E0219 10:10:18.199482 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:10:33 crc kubenswrapper[4965]: I0219 10:10:33.199385 4965 scope.go:117] "RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:10:33 crc kubenswrapper[4965]: E0219 10:10:33.200134 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:10:39 crc kubenswrapper[4965]: I0219 10:10:39.662537 4965 scope.go:117] "RemoveContainer" containerID="aca2560cfad27a77ad1d806aae5b17a0c4b6722235e43ff20de62370a59b472a" Feb 19 10:10:39 crc kubenswrapper[4965]: I0219 10:10:39.706780 4965 scope.go:117] "RemoveContainer" containerID="a60392232dfe2d29976017ebb4400c1b12da8611796e84a5d35349313cfd48ff" Feb 19 10:10:39 crc kubenswrapper[4965]: I0219 10:10:39.742883 4965 scope.go:117] "RemoveContainer" containerID="6bae93998cadbb0f6db20398825e276d6023176ba1e54278ec384c904652ebc5" Feb 19 10:10:44 crc kubenswrapper[4965]: I0219 10:10:44.197931 4965 scope.go:117] "RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:10:44 crc kubenswrapper[4965]: E0219 10:10:44.198758 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:10:59 crc kubenswrapper[4965]: I0219 10:10:59.198912 4965 scope.go:117] "RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:10:59 crc kubenswrapper[4965]: E0219 10:10:59.200440 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:11:02 crc kubenswrapper[4965]: I0219 10:11:02.497053 4965 generic.go:334] "Generic (PLEG): container finished" podID="a6a006f0-d704-4e08-bc46-118269ad9b1a" containerID="ea86e3e26447948246288c2d901c011373b2ceae33101e6a28ec541801f7bad9" exitCode=0 Feb 19 10:11:02 crc kubenswrapper[4965]: I0219 10:11:02.498807 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9" event={"ID":"a6a006f0-d704-4e08-bc46-118269ad9b1a","Type":"ContainerDied","Data":"ea86e3e26447948246288c2d901c011373b2ceae33101e6a28ec541801f7bad9"} Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.087971 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.250277 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfr9n\" (UniqueName: \"kubernetes.io/projected/a6a006f0-d704-4e08-bc46-118269ad9b1a-kube-api-access-qfr9n\") pod \"a6a006f0-d704-4e08-bc46-118269ad9b1a\" (UID: \"a6a006f0-d704-4e08-bc46-118269ad9b1a\") " Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.250335 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6a006f0-d704-4e08-bc46-118269ad9b1a-inventory\") pod \"a6a006f0-d704-4e08-bc46-118269ad9b1a\" (UID: \"a6a006f0-d704-4e08-bc46-118269ad9b1a\") " Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.251176 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/a6a006f0-d704-4e08-bc46-118269ad9b1a-ssh-key-openstack-edpm-ipam\") pod \"a6a006f0-d704-4e08-bc46-118269ad9b1a\" (UID: \"a6a006f0-d704-4e08-bc46-118269ad9b1a\") " Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.251265 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6a006f0-d704-4e08-bc46-118269ad9b1a-bootstrap-combined-ca-bundle\") pod \"a6a006f0-d704-4e08-bc46-118269ad9b1a\" (UID: \"a6a006f0-d704-4e08-bc46-118269ad9b1a\") " Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.258034 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6a006f0-d704-4e08-bc46-118269ad9b1a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a6a006f0-d704-4e08-bc46-118269ad9b1a" (UID: "a6a006f0-d704-4e08-bc46-118269ad9b1a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.258215 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6a006f0-d704-4e08-bc46-118269ad9b1a-kube-api-access-qfr9n" (OuterVolumeSpecName: "kube-api-access-qfr9n") pod "a6a006f0-d704-4e08-bc46-118269ad9b1a" (UID: "a6a006f0-d704-4e08-bc46-118269ad9b1a"). InnerVolumeSpecName "kube-api-access-qfr9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:11:04 crc kubenswrapper[4965]: E0219 10:11:04.291610 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6a006f0-d704-4e08-bc46-118269ad9b1a-ssh-key-openstack-edpm-ipam podName:a6a006f0-d704-4e08-bc46-118269ad9b1a nodeName:}" failed. No retries permitted until 2026-02-19 10:11:04.791586554 +0000 UTC m=+1720.412907864 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/a6a006f0-d704-4e08-bc46-118269ad9b1a-ssh-key-openstack-edpm-ipam") pod "a6a006f0-d704-4e08-bc46-118269ad9b1a" (UID: "a6a006f0-d704-4e08-bc46-118269ad9b1a") : error deleting /var/lib/kubelet/pods/a6a006f0-d704-4e08-bc46-118269ad9b1a/volume-subpaths: remove /var/lib/kubelet/pods/a6a006f0-d704-4e08-bc46-118269ad9b1a/volume-subpaths: no such file or directory Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.294339 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6a006f0-d704-4e08-bc46-118269ad9b1a-inventory" (OuterVolumeSpecName: "inventory") pod "a6a006f0-d704-4e08-bc46-118269ad9b1a" (UID: "a6a006f0-d704-4e08-bc46-118269ad9b1a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.353924 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfr9n\" (UniqueName: \"kubernetes.io/projected/a6a006f0-d704-4e08-bc46-118269ad9b1a-kube-api-access-qfr9n\") on node \"crc\" DevicePath \"\"" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.353949 4965 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6a006f0-d704-4e08-bc46-118269ad9b1a-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.353959 4965 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6a006f0-d704-4e08-bc46-118269ad9b1a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.524970 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9" 
event={"ID":"a6a006f0-d704-4e08-bc46-118269ad9b1a","Type":"ContainerDied","Data":"cd8818b8c5d9a0bf7dcd2a9de3cdf80aaeea91491fac3bb8a38c3dcd9b57ca41"} Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.525015 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd8818b8c5d9a0bf7dcd2a9de3cdf80aaeea91491fac3bb8a38c3dcd9b57ca41" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.525049 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.629116 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vfth9"] Feb 19 10:11:04 crc kubenswrapper[4965]: E0219 10:11:04.629885 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="355479f0-bc77-47b7-9fa5-472e2cead404" containerName="extract-utilities" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.629922 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="355479f0-bc77-47b7-9fa5-472e2cead404" containerName="extract-utilities" Feb 19 10:11:04 crc kubenswrapper[4965]: E0219 10:11:04.629943 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6a006f0-d704-4e08-bc46-118269ad9b1a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.629958 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6a006f0-d704-4e08-bc46-118269ad9b1a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 10:11:04 crc kubenswrapper[4965]: E0219 10:11:04.629981 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="355479f0-bc77-47b7-9fa5-472e2cead404" containerName="registry-server" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.629997 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="355479f0-bc77-47b7-9fa5-472e2cead404" 
containerName="registry-server" Feb 19 10:11:04 crc kubenswrapper[4965]: E0219 10:11:04.630023 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="355479f0-bc77-47b7-9fa5-472e2cead404" containerName="extract-content" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.630035 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="355479f0-bc77-47b7-9fa5-472e2cead404" containerName="extract-content" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.630414 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6a006f0-d704-4e08-bc46-118269ad9b1a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.630450 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="355479f0-bc77-47b7-9fa5-472e2cead404" containerName="registry-server" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.631704 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vfth9" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.643283 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vfth9"] Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.771014 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpp7b\" (UniqueName: \"kubernetes.io/projected/2cc29510-fe65-45e1-b4fe-fef9bb2923b0-kube-api-access-bpp7b\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vfth9\" (UID: \"2cc29510-fe65-45e1-b4fe-fef9bb2923b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vfth9" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.771467 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/2cc29510-fe65-45e1-b4fe-fef9bb2923b0-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vfth9\" (UID: \"2cc29510-fe65-45e1-b4fe-fef9bb2923b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vfth9" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.771748 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cc29510-fe65-45e1-b4fe-fef9bb2923b0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vfth9\" (UID: \"2cc29510-fe65-45e1-b4fe-fef9bb2923b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vfth9" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.873970 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6a006f0-d704-4e08-bc46-118269ad9b1a-ssh-key-openstack-edpm-ipam\") pod \"a6a006f0-d704-4e08-bc46-118269ad9b1a\" (UID: \"a6a006f0-d704-4e08-bc46-118269ad9b1a\") " Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.874730 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cc29510-fe65-45e1-b4fe-fef9bb2923b0-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vfth9\" (UID: \"2cc29510-fe65-45e1-b4fe-fef9bb2923b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vfth9" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.874844 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cc29510-fe65-45e1-b4fe-fef9bb2923b0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vfth9\" (UID: \"2cc29510-fe65-45e1-b4fe-fef9bb2923b0\") " 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vfth9" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.875763 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpp7b\" (UniqueName: \"kubernetes.io/projected/2cc29510-fe65-45e1-b4fe-fef9bb2923b0-kube-api-access-bpp7b\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vfth9\" (UID: \"2cc29510-fe65-45e1-b4fe-fef9bb2923b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vfth9" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.878486 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6a006f0-d704-4e08-bc46-118269ad9b1a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a6a006f0-d704-4e08-bc46-118269ad9b1a" (UID: "a6a006f0-d704-4e08-bc46-118269ad9b1a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.879918 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cc29510-fe65-45e1-b4fe-fef9bb2923b0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vfth9\" (UID: \"2cc29510-fe65-45e1-b4fe-fef9bb2923b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vfth9" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.879920 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cc29510-fe65-45e1-b4fe-fef9bb2923b0-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vfth9\" (UID: \"2cc29510-fe65-45e1-b4fe-fef9bb2923b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vfth9" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.903774 4965 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bpp7b\" (UniqueName: \"kubernetes.io/projected/2cc29510-fe65-45e1-b4fe-fef9bb2923b0-kube-api-access-bpp7b\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vfth9\" (UID: \"2cc29510-fe65-45e1-b4fe-fef9bb2923b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vfth9" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.954901 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vfth9" Feb 19 10:11:04 crc kubenswrapper[4965]: I0219 10:11:04.978141 4965 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6a006f0-d704-4e08-bc46-118269ad9b1a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:11:05 crc kubenswrapper[4965]: I0219 10:11:05.591902 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vfth9"] Feb 19 10:11:06 crc kubenswrapper[4965]: I0219 10:11:06.544337 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vfth9" event={"ID":"2cc29510-fe65-45e1-b4fe-fef9bb2923b0","Type":"ContainerStarted","Data":"0d265ff598b89fe7902bc1375e10e827fbde89125db8966fc8a30964cbbceb4c"} Feb 19 10:11:06 crc kubenswrapper[4965]: I0219 10:11:06.544666 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vfth9" event={"ID":"2cc29510-fe65-45e1-b4fe-fef9bb2923b0","Type":"ContainerStarted","Data":"6c5b24c1808999e12af9da927703c161cbc6974f4e76815acddf4c19ae7eb740"} Feb 19 10:11:06 crc kubenswrapper[4965]: I0219 10:11:06.563160 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vfth9" podStartSLOduration=2.091168492 
podStartE2EDuration="2.563136561s" podCreationTimestamp="2026-02-19 10:11:04 +0000 UTC" firstStartedPulling="2026-02-19 10:11:05.593987673 +0000 UTC m=+1721.215308993" lastFinishedPulling="2026-02-19 10:11:06.065955752 +0000 UTC m=+1721.687277062" observedRunningTime="2026-02-19 10:11:06.557015854 +0000 UTC m=+1722.178337164" watchObservedRunningTime="2026-02-19 10:11:06.563136561 +0000 UTC m=+1722.184457871" Feb 19 10:11:12 crc kubenswrapper[4965]: I0219 10:11:12.199279 4965 scope.go:117] "RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:11:12 crc kubenswrapper[4965]: E0219 10:11:12.200220 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:11:23 crc kubenswrapper[4965]: I0219 10:11:23.199060 4965 scope.go:117] "RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:11:23 crc kubenswrapper[4965]: E0219 10:11:23.199964 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:11:35 crc kubenswrapper[4965]: I0219 10:11:35.220832 4965 scope.go:117] "RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:11:35 crc kubenswrapper[4965]: E0219 10:11:35.222531 
4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:11:39 crc kubenswrapper[4965]: I0219 10:11:39.857280 4965 scope.go:117] "RemoveContainer" containerID="1f579eaee71c7a9dc8a5379f1e5f83bc285c6d4eaf2acc3566b5f381ea6667c4" Feb 19 10:11:39 crc kubenswrapper[4965]: I0219 10:11:39.883781 4965 scope.go:117] "RemoveContainer" containerID="4f5cf7e8baf7d6cab5ae59ce06c0ee44b12e6416ae5f5dbe5dc3af81f258cfe7" Feb 19 10:11:39 crc kubenswrapper[4965]: I0219 10:11:39.914972 4965 scope.go:117] "RemoveContainer" containerID="cd2223af625aaf5c14380d7e00959262f24260882cb9053e7ef6e0f0c8857b92" Feb 19 10:11:39 crc kubenswrapper[4965]: I0219 10:11:39.944131 4965 scope.go:117] "RemoveContainer" containerID="f85798d7f6105aae216db015b6205b18df3d53f8707ab9a1dfab0d4aec4f5ecb" Feb 19 10:11:39 crc kubenswrapper[4965]: I0219 10:11:39.967809 4965 scope.go:117] "RemoveContainer" containerID="06b8e45a13ff271edb63dc5141fa51d3cd108a2a4888b96e852541d8b583efcc" Feb 19 10:11:39 crc kubenswrapper[4965]: I0219 10:11:39.988835 4965 scope.go:117] "RemoveContainer" containerID="35d9cb5fff67f62d381376ed728ea50496920f694b5ea9d371ddbdc0da48546d" Feb 19 10:11:40 crc kubenswrapper[4965]: I0219 10:11:40.026782 4965 scope.go:117] "RemoveContainer" containerID="1d01fec3c6e56a2f55da4dc0ed1e7913354afb8b3557ad30632dfb3e647f2a49" Feb 19 10:11:40 crc kubenswrapper[4965]: I0219 10:11:40.058839 4965 scope.go:117] "RemoveContainer" containerID="475d7b0f8a76a385b34955f80982c1f0792b7806b49576cbe9b801197953d60e" Feb 19 10:11:50 crc kubenswrapper[4965]: I0219 10:11:50.369602 4965 scope.go:117] "RemoveContainer" 
containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:11:50 crc kubenswrapper[4965]: E0219 10:11:50.371777 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:12:05 crc kubenswrapper[4965]: I0219 10:12:05.208761 4965 scope.go:117] "RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:12:05 crc kubenswrapper[4965]: E0219 10:12:05.209818 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:12:10 crc kubenswrapper[4965]: I0219 10:12:10.060457 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-pl6lh"] Feb 19 10:12:10 crc kubenswrapper[4965]: I0219 10:12:10.079710 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-pl6lh"] Feb 19 10:12:11 crc kubenswrapper[4965]: I0219 10:12:11.067969 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-596f-account-create-update-l526t"] Feb 19 10:12:11 crc kubenswrapper[4965]: I0219 10:12:11.088045 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-25kh7"] Feb 19 10:12:11 crc kubenswrapper[4965]: I0219 10:12:11.098741 4965 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/keystone-db-create-25kh7"] Feb 19 10:12:11 crc kubenswrapper[4965]: I0219 10:12:11.108360 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-596f-account-create-update-l526t"] Feb 19 10:12:11 crc kubenswrapper[4965]: I0219 10:12:11.220955 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="504af3e1-9b2c-4c21-8243-00e8b011c665" path="/var/lib/kubelet/pods/504af3e1-9b2c-4c21-8243-00e8b011c665/volumes" Feb 19 10:12:11 crc kubenswrapper[4965]: I0219 10:12:11.222506 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f4a6564-b3dd-48b8-8f45-b89155f4ddbf" path="/var/lib/kubelet/pods/5f4a6564-b3dd-48b8-8f45-b89155f4ddbf/volumes" Feb 19 10:12:11 crc kubenswrapper[4965]: I0219 10:12:11.223989 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f25999-5c83-4b40-9d6e-c32d88532e00" path="/var/lib/kubelet/pods/87f25999-5c83-4b40-9d6e-c32d88532e00/volumes" Feb 19 10:12:12 crc kubenswrapper[4965]: I0219 10:12:12.037523 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7351-account-create-update-8ssjj"] Feb 19 10:12:12 crc kubenswrapper[4965]: I0219 10:12:12.050506 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-ff7d-account-create-update-wx6sr"] Feb 19 10:12:12 crc kubenswrapper[4965]: I0219 10:12:12.063603 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-kldd9"] Feb 19 10:12:12 crc kubenswrapper[4965]: I0219 10:12:12.073001 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7351-account-create-update-8ssjj"] Feb 19 10:12:12 crc kubenswrapper[4965]: I0219 10:12:12.082381 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-ff7d-account-create-update-wx6sr"] Feb 19 10:12:12 crc kubenswrapper[4965]: I0219 10:12:12.091891 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-db-create-kldd9"] Feb 19 10:12:13 crc kubenswrapper[4965]: I0219 10:12:13.222995 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7b581c7-5ca4-4e60-bea9-db65839ed46c" path="/var/lib/kubelet/pods/b7b581c7-5ca4-4e60-bea9-db65839ed46c/volumes" Feb 19 10:12:13 crc kubenswrapper[4965]: I0219 10:12:13.224522 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baca6400-0fa5-49f2-8eb2-54a774607cc3" path="/var/lib/kubelet/pods/baca6400-0fa5-49f2-8eb2-54a774607cc3/volumes" Feb 19 10:12:13 crc kubenswrapper[4965]: I0219 10:12:13.226274 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9a5f937-3184-4cef-a4ac-8f7205952bbc" path="/var/lib/kubelet/pods/e9a5f937-3184-4cef-a4ac-8f7205952bbc/volumes" Feb 19 10:12:19 crc kubenswrapper[4965]: I0219 10:12:19.198173 4965 scope.go:117] "RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:12:19 crc kubenswrapper[4965]: E0219 10:12:19.199062 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:12:24 crc kubenswrapper[4965]: I0219 10:12:24.035865 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-pr7kx"] Feb 19 10:12:24 crc kubenswrapper[4965]: I0219 10:12:24.051904 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-pr7kx"] Feb 19 10:12:25 crc kubenswrapper[4965]: I0219 10:12:25.224549 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd8d85c-9a7d-4f54-a589-330a68d04f51" 
path="/var/lib/kubelet/pods/3bd8d85c-9a7d-4f54-a589-330a68d04f51/volumes" Feb 19 10:12:30 crc kubenswrapper[4965]: I0219 10:12:30.198127 4965 scope.go:117] "RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:12:30 crc kubenswrapper[4965]: E0219 10:12:30.199376 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:12:34 crc kubenswrapper[4965]: I0219 10:12:34.934846 4965 generic.go:334] "Generic (PLEG): container finished" podID="2cc29510-fe65-45e1-b4fe-fef9bb2923b0" containerID="0d265ff598b89fe7902bc1375e10e827fbde89125db8966fc8a30964cbbceb4c" exitCode=0 Feb 19 10:12:34 crc kubenswrapper[4965]: I0219 10:12:34.934972 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vfth9" event={"ID":"2cc29510-fe65-45e1-b4fe-fef9bb2923b0","Type":"ContainerDied","Data":"0d265ff598b89fe7902bc1375e10e827fbde89125db8966fc8a30964cbbceb4c"} Feb 19 10:12:36 crc kubenswrapper[4965]: I0219 10:12:36.654107 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vfth9" Feb 19 10:12:36 crc kubenswrapper[4965]: I0219 10:12:36.740586 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cc29510-fe65-45e1-b4fe-fef9bb2923b0-ssh-key-openstack-edpm-ipam\") pod \"2cc29510-fe65-45e1-b4fe-fef9bb2923b0\" (UID: \"2cc29510-fe65-45e1-b4fe-fef9bb2923b0\") " Feb 19 10:12:36 crc kubenswrapper[4965]: I0219 10:12:36.740734 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cc29510-fe65-45e1-b4fe-fef9bb2923b0-inventory\") pod \"2cc29510-fe65-45e1-b4fe-fef9bb2923b0\" (UID: \"2cc29510-fe65-45e1-b4fe-fef9bb2923b0\") " Feb 19 10:12:36 crc kubenswrapper[4965]: I0219 10:12:36.740801 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpp7b\" (UniqueName: \"kubernetes.io/projected/2cc29510-fe65-45e1-b4fe-fef9bb2923b0-kube-api-access-bpp7b\") pod \"2cc29510-fe65-45e1-b4fe-fef9bb2923b0\" (UID: \"2cc29510-fe65-45e1-b4fe-fef9bb2923b0\") " Feb 19 10:12:36 crc kubenswrapper[4965]: I0219 10:12:36.747373 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cc29510-fe65-45e1-b4fe-fef9bb2923b0-kube-api-access-bpp7b" (OuterVolumeSpecName: "kube-api-access-bpp7b") pod "2cc29510-fe65-45e1-b4fe-fef9bb2923b0" (UID: "2cc29510-fe65-45e1-b4fe-fef9bb2923b0"). InnerVolumeSpecName "kube-api-access-bpp7b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:12:36 crc kubenswrapper[4965]: I0219 10:12:36.773308 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc29510-fe65-45e1-b4fe-fef9bb2923b0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2cc29510-fe65-45e1-b4fe-fef9bb2923b0" (UID: "2cc29510-fe65-45e1-b4fe-fef9bb2923b0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:12:36 crc kubenswrapper[4965]: I0219 10:12:36.789075 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc29510-fe65-45e1-b4fe-fef9bb2923b0-inventory" (OuterVolumeSpecName: "inventory") pod "2cc29510-fe65-45e1-b4fe-fef9bb2923b0" (UID: "2cc29510-fe65-45e1-b4fe-fef9bb2923b0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:12:36 crc kubenswrapper[4965]: I0219 10:12:36.844890 4965 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cc29510-fe65-45e1-b4fe-fef9bb2923b0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:36 crc kubenswrapper[4965]: I0219 10:12:36.844966 4965 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cc29510-fe65-45e1-b4fe-fef9bb2923b0-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:36 crc kubenswrapper[4965]: I0219 10:12:36.844981 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpp7b\" (UniqueName: \"kubernetes.io/projected/2cc29510-fe65-45e1-b4fe-fef9bb2923b0-kube-api-access-bpp7b\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:36 crc kubenswrapper[4965]: I0219 10:12:36.959288 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vfth9" 
event={"ID":"2cc29510-fe65-45e1-b4fe-fef9bb2923b0","Type":"ContainerDied","Data":"6c5b24c1808999e12af9da927703c161cbc6974f4e76815acddf4c19ae7eb740"} Feb 19 10:12:36 crc kubenswrapper[4965]: I0219 10:12:36.959365 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c5b24c1808999e12af9da927703c161cbc6974f4e76815acddf4c19ae7eb740" Feb 19 10:12:36 crc kubenswrapper[4965]: I0219 10:12:36.959376 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vfth9" Feb 19 10:12:37 crc kubenswrapper[4965]: I0219 10:12:37.066305 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z"] Feb 19 10:12:37 crc kubenswrapper[4965]: E0219 10:12:37.066912 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cc29510-fe65-45e1-b4fe-fef9bb2923b0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 10:12:37 crc kubenswrapper[4965]: I0219 10:12:37.066936 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc29510-fe65-45e1-b4fe-fef9bb2923b0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 10:12:37 crc kubenswrapper[4965]: I0219 10:12:37.067237 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cc29510-fe65-45e1-b4fe-fef9bb2923b0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 10:12:37 crc kubenswrapper[4965]: I0219 10:12:37.068239 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z" Feb 19 10:12:37 crc kubenswrapper[4965]: I0219 10:12:37.070703 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cthw6" Feb 19 10:12:37 crc kubenswrapper[4965]: I0219 10:12:37.071113 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:12:37 crc kubenswrapper[4965]: I0219 10:12:37.071179 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:12:37 crc kubenswrapper[4965]: I0219 10:12:37.071375 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:12:37 crc kubenswrapper[4965]: I0219 10:12:37.077049 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z"] Feb 19 10:12:37 crc kubenswrapper[4965]: I0219 10:12:37.152376 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04f58633-9350-49a8-9c41-522490a298eb-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z\" (UID: \"04f58633-9350-49a8-9c41-522490a298eb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z" Feb 19 10:12:37 crc kubenswrapper[4965]: I0219 10:12:37.152515 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtfmp\" (UniqueName: \"kubernetes.io/projected/04f58633-9350-49a8-9c41-522490a298eb-kube-api-access-vtfmp\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z\" (UID: \"04f58633-9350-49a8-9c41-522490a298eb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z" Feb 19 10:12:37 crc kubenswrapper[4965]: 
I0219 10:12:37.152551 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04f58633-9350-49a8-9c41-522490a298eb-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z\" (UID: \"04f58633-9350-49a8-9c41-522490a298eb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z" Feb 19 10:12:37 crc kubenswrapper[4965]: I0219 10:12:37.254699 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtfmp\" (UniqueName: \"kubernetes.io/projected/04f58633-9350-49a8-9c41-522490a298eb-kube-api-access-vtfmp\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z\" (UID: \"04f58633-9350-49a8-9c41-522490a298eb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z" Feb 19 10:12:37 crc kubenswrapper[4965]: I0219 10:12:37.255036 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04f58633-9350-49a8-9c41-522490a298eb-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z\" (UID: \"04f58633-9350-49a8-9c41-522490a298eb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z" Feb 19 10:12:37 crc kubenswrapper[4965]: I0219 10:12:37.255111 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04f58633-9350-49a8-9c41-522490a298eb-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z\" (UID: \"04f58633-9350-49a8-9c41-522490a298eb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z" Feb 19 10:12:37 crc kubenswrapper[4965]: I0219 10:12:37.259372 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/04f58633-9350-49a8-9c41-522490a298eb-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z\" (UID: \"04f58633-9350-49a8-9c41-522490a298eb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z" Feb 19 10:12:37 crc kubenswrapper[4965]: I0219 10:12:37.260065 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04f58633-9350-49a8-9c41-522490a298eb-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z\" (UID: \"04f58633-9350-49a8-9c41-522490a298eb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z" Feb 19 10:12:37 crc kubenswrapper[4965]: I0219 10:12:37.276425 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtfmp\" (UniqueName: \"kubernetes.io/projected/04f58633-9350-49a8-9c41-522490a298eb-kube-api-access-vtfmp\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z\" (UID: \"04f58633-9350-49a8-9c41-522490a298eb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z" Feb 19 10:12:37 crc kubenswrapper[4965]: I0219 10:12:37.388588 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z" Feb 19 10:12:37 crc kubenswrapper[4965]: I0219 10:12:37.987670 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z"] Feb 19 10:12:38 crc kubenswrapper[4965]: I0219 10:12:38.984639 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z" event={"ID":"04f58633-9350-49a8-9c41-522490a298eb","Type":"ContainerStarted","Data":"d6ea33994f73f90dfc0406ac26fd6df376657031d298372ff81a9da853d5381e"} Feb 19 10:12:39 crc kubenswrapper[4965]: I0219 10:12:39.040878 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-9s824"] Feb 19 10:12:39 crc kubenswrapper[4965]: I0219 10:12:39.052489 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-zhgqf"] Feb 19 10:12:39 crc kubenswrapper[4965]: I0219 10:12:39.065480 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-zwjcj"] Feb 19 10:12:39 crc kubenswrapper[4965]: I0219 10:12:39.075517 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-zhgqf"] Feb 19 10:12:39 crc kubenswrapper[4965]: I0219 10:12:39.084541 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-create-9s824"] Feb 19 10:12:39 crc kubenswrapper[4965]: I0219 10:12:39.110587 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-zwjcj"] Feb 19 10:12:39 crc kubenswrapper[4965]: I0219 10:12:39.219431 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57994f21-19c8-4e09-b972-de9d0f398410" path="/var/lib/kubelet/pods/57994f21-19c8-4e09-b972-de9d0f398410/volumes" Feb 19 10:12:39 crc kubenswrapper[4965]: I0219 10:12:39.220079 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a9f0aaf4-29c1-4187-a237-39502b74bbe9" path="/var/lib/kubelet/pods/a9f0aaf4-29c1-4187-a237-39502b74bbe9/volumes" Feb 19 10:12:39 crc kubenswrapper[4965]: I0219 10:12:39.221506 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e59eb68c-26e2-4951-900e-5a7b59197d54" path="/var/lib/kubelet/pods/e59eb68c-26e2-4951-900e-5a7b59197d54/volumes" Feb 19 10:12:39 crc kubenswrapper[4965]: I0219 10:12:39.998898 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z" event={"ID":"04f58633-9350-49a8-9c41-522490a298eb","Type":"ContainerStarted","Data":"bf1c2f347e734e5b4e286654537be3b46045949c2f90ca288e2543a6a01cfc20"} Feb 19 10:12:40 crc kubenswrapper[4965]: I0219 10:12:40.037161 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z" podStartSLOduration=1.9257681039999999 podStartE2EDuration="3.037132517s" podCreationTimestamp="2026-02-19 10:12:37 +0000 UTC" firstStartedPulling="2026-02-19 10:12:37.997989767 +0000 UTC m=+1813.619311087" lastFinishedPulling="2026-02-19 10:12:39.10935419 +0000 UTC m=+1814.730675500" observedRunningTime="2026-02-19 10:12:40.027815351 +0000 UTC m=+1815.649136661" watchObservedRunningTime="2026-02-19 10:12:40.037132517 +0000 UTC m=+1815.658453867" Feb 19 10:12:40 crc kubenswrapper[4965]: I0219 10:12:40.156569 4965 scope.go:117] "RemoveContainer" containerID="146be35c72f957cac9abab6462be4be9050a65418ad879665bfaf2f8085ab115" Feb 19 10:12:40 crc kubenswrapper[4965]: I0219 10:12:40.180931 4965 scope.go:117] "RemoveContainer" containerID="1852de23369442e85fdeb588ebe22c4d2dbf04d7d4bc0beeadd826819c3f253c" Feb 19 10:12:40 crc kubenswrapper[4965]: I0219 10:12:40.267902 4965 scope.go:117] "RemoveContainer" containerID="a4e325c54e3b3515aaa7fe72f19d2cda2462a7f90e24a8802365a26273561f24" Feb 19 10:12:40 crc kubenswrapper[4965]: I0219 10:12:40.336124 4965 scope.go:117] 
"RemoveContainer" containerID="526001e016d3f1ecd82f92520a4c5184147503fbabed0e6f8703f6924e37a45b" Feb 19 10:12:40 crc kubenswrapper[4965]: I0219 10:12:40.393050 4965 scope.go:117] "RemoveContainer" containerID="b2c13e1059d076cb67bc06414f3afa17cf1b7857d675093db6bea1e5de55dc5c" Feb 19 10:12:40 crc kubenswrapper[4965]: I0219 10:12:40.427883 4965 scope.go:117] "RemoveContainer" containerID="bc1f510260ed5f9e5edc13268f9c1fceb24cee7e6ba66cf62dd489e767ce159b" Feb 19 10:12:40 crc kubenswrapper[4965]: I0219 10:12:40.485631 4965 scope.go:117] "RemoveContainer" containerID="3de348a507e057bdc27188e8836a0f27d6fb5f564743fa24e567ce6c24a7c27b" Feb 19 10:12:40 crc kubenswrapper[4965]: I0219 10:12:40.515569 4965 scope.go:117] "RemoveContainer" containerID="37b1a3213ee9697bb90c072ab7d08ac6fb1373b0c45fe8c561e027c524da23ec" Feb 19 10:12:40 crc kubenswrapper[4965]: I0219 10:12:40.538730 4965 scope.go:117] "RemoveContainer" containerID="d9b57422079ea1c76777a981c29de019e18e721210b7a1402aa522902e89d98a" Feb 19 10:12:40 crc kubenswrapper[4965]: I0219 10:12:40.557684 4965 scope.go:117] "RemoveContainer" containerID="d456e2f2a99a581533e374b7b37765945723f234c8eba5b3534a15b6418180e8" Feb 19 10:12:42 crc kubenswrapper[4965]: I0219 10:12:42.078025 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c4ed-account-create-update-w5v9f"] Feb 19 10:12:42 crc kubenswrapper[4965]: I0219 10:12:42.092985 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-6865-account-create-update-ql48d"] Feb 19 10:12:42 crc kubenswrapper[4965]: I0219 10:12:42.101706 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f339-account-create-update-vw9jg"] Feb 19 10:12:42 crc kubenswrapper[4965]: I0219 10:12:42.109837 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-7880-account-create-update-fxqfz"] Feb 19 10:12:42 crc kubenswrapper[4965]: I0219 10:12:42.118955 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-c4ed-account-create-update-w5v9f"] Feb 19 10:12:42 crc kubenswrapper[4965]: I0219 10:12:42.127361 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-6865-account-create-update-ql48d"] Feb 19 10:12:42 crc kubenswrapper[4965]: I0219 10:12:42.135971 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-f339-account-create-update-vw9jg"] Feb 19 10:12:42 crc kubenswrapper[4965]: I0219 10:12:42.146269 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-7880-account-create-update-fxqfz"] Feb 19 10:12:42 crc kubenswrapper[4965]: I0219 10:12:42.155278 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-4dd7f"] Feb 19 10:12:42 crc kubenswrapper[4965]: I0219 10:12:42.163762 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-4dd7f"] Feb 19 10:12:43 crc kubenswrapper[4965]: I0219 10:12:43.198672 4965 scope.go:117] "RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:12:43 crc kubenswrapper[4965]: E0219 10:12:43.200490 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:12:43 crc kubenswrapper[4965]: I0219 10:12:43.216850 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="547c989f-c71d-4a1b-9031-61fd03d9c2f1" path="/var/lib/kubelet/pods/547c989f-c71d-4a1b-9031-61fd03d9c2f1/volumes" Feb 19 10:12:43 crc kubenswrapper[4965]: I0219 10:12:43.218073 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5eb6197e-e339-43bd-861a-faff9e8f4f65" path="/var/lib/kubelet/pods/5eb6197e-e339-43bd-861a-faff9e8f4f65/volumes" Feb 19 10:12:43 crc kubenswrapper[4965]: I0219 10:12:43.219611 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfd014f4-1151-4fc5-8b0b-cfeb54b3845d" path="/var/lib/kubelet/pods/dfd014f4-1151-4fc5-8b0b-cfeb54b3845d/volumes" Feb 19 10:12:43 crc kubenswrapper[4965]: I0219 10:12:43.221630 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3111b00-ad13-4a92-97ca-95a778007dc2" path="/var/lib/kubelet/pods/e3111b00-ad13-4a92-97ca-95a778007dc2/volumes" Feb 19 10:12:43 crc kubenswrapper[4965]: I0219 10:12:43.223916 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5fa636a-ebf1-4873-a54b-bdf1171f8138" path="/var/lib/kubelet/pods/f5fa636a-ebf1-4873-a54b-bdf1171f8138/volumes" Feb 19 10:12:45 crc kubenswrapper[4965]: I0219 10:12:45.043337 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-vkkc7"] Feb 19 10:12:45 crc kubenswrapper[4965]: I0219 10:12:45.056972 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-vkkc7"] Feb 19 10:12:45 crc kubenswrapper[4965]: I0219 10:12:45.233994 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4" path="/var/lib/kubelet/pods/18ba8da5-8c18-4dad-91ce-dc34ef3fc6e4/volumes" Feb 19 10:12:47 crc kubenswrapper[4965]: I0219 10:12:47.032889 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-r86pd"] Feb 19 10:12:47 crc kubenswrapper[4965]: I0219 10:12:47.040947 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-r86pd"] Feb 19 10:12:47 crc kubenswrapper[4965]: I0219 10:12:47.220366 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e834046-19e3-47b1-b822-6c73b0d8be74" path="/var/lib/kubelet/pods/5e834046-19e3-47b1-b822-6c73b0d8be74/volumes" 
Feb 19 10:12:57 crc kubenswrapper[4965]: I0219 10:12:57.197718 4965 scope.go:117] "RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:12:57 crc kubenswrapper[4965]: E0219 10:12:57.198535 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:13:10 crc kubenswrapper[4965]: I0219 10:13:10.197865 4965 scope.go:117] "RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:13:10 crc kubenswrapper[4965]: E0219 10:13:10.198630 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:13:20 crc kubenswrapper[4965]: I0219 10:13:20.046256 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-wsss7"] Feb 19 10:13:20 crc kubenswrapper[4965]: I0219 10:13:20.056549 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-wsss7"] Feb 19 10:13:21 crc kubenswrapper[4965]: I0219 10:13:21.224187 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4c38eda-5f59-4756-a3b7-2731c66ef436" path="/var/lib/kubelet/pods/f4c38eda-5f59-4756-a3b7-2731c66ef436/volumes" Feb 19 10:13:25 crc kubenswrapper[4965]: I0219 10:13:25.218918 4965 scope.go:117] 
"RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:13:25 crc kubenswrapper[4965]: I0219 10:13:25.518950 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerStarted","Data":"6e790f4d4e6658655db9f91927db114ee9b37405e8ae4a7d350746d0c209e2f2"} Feb 19 10:13:32 crc kubenswrapper[4965]: I0219 10:13:32.032738 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-6vqp7"] Feb 19 10:13:32 crc kubenswrapper[4965]: I0219 10:13:32.043223 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-6vqp7"] Feb 19 10:13:33 crc kubenswrapper[4965]: I0219 10:13:33.032741 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-ns8h9"] Feb 19 10:13:33 crc kubenswrapper[4965]: I0219 10:13:33.043024 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-ns8h9"] Feb 19 10:13:33 crc kubenswrapper[4965]: I0219 10:13:33.213105 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53127e22-4e09-45f9-a73b-641d087fd3cd" path="/var/lib/kubelet/pods/53127e22-4e09-45f9-a73b-641d087fd3cd/volumes" Feb 19 10:13:33 crc kubenswrapper[4965]: I0219 10:13:33.214077 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8671fa02-a5fa-41f0-b232-fdfc4133ab58" path="/var/lib/kubelet/pods/8671fa02-a5fa-41f0-b232-fdfc4133ab58/volumes" Feb 19 10:13:39 crc kubenswrapper[4965]: I0219 10:13:39.657783 4965 generic.go:334] "Generic (PLEG): container finished" podID="04f58633-9350-49a8-9c41-522490a298eb" containerID="bf1c2f347e734e5b4e286654537be3b46045949c2f90ca288e2543a6a01cfc20" exitCode=0 Feb 19 10:13:39 crc kubenswrapper[4965]: I0219 10:13:39.657871 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z" event={"ID":"04f58633-9350-49a8-9c41-522490a298eb","Type":"ContainerDied","Data":"bf1c2f347e734e5b4e286654537be3b46045949c2f90ca288e2543a6a01cfc20"} Feb 19 10:13:39 crc kubenswrapper[4965]: I0219 10:13:39.880000 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d4w6n"] Feb 19 10:13:39 crc kubenswrapper[4965]: I0219 10:13:39.882523 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d4w6n" Feb 19 10:13:39 crc kubenswrapper[4965]: I0219 10:13:39.891216 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d4w6n"] Feb 19 10:13:39 crc kubenswrapper[4965]: I0219 10:13:39.910175 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80443a02-bea0-40dc-aff2-9472db3c29f6-catalog-content\") pod \"redhat-operators-d4w6n\" (UID: \"80443a02-bea0-40dc-aff2-9472db3c29f6\") " pod="openshift-marketplace/redhat-operators-d4w6n" Feb 19 10:13:39 crc kubenswrapper[4965]: I0219 10:13:39.910312 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zhhr\" (UniqueName: \"kubernetes.io/projected/80443a02-bea0-40dc-aff2-9472db3c29f6-kube-api-access-9zhhr\") pod \"redhat-operators-d4w6n\" (UID: \"80443a02-bea0-40dc-aff2-9472db3c29f6\") " pod="openshift-marketplace/redhat-operators-d4w6n" Feb 19 10:13:39 crc kubenswrapper[4965]: I0219 10:13:39.910433 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80443a02-bea0-40dc-aff2-9472db3c29f6-utilities\") pod \"redhat-operators-d4w6n\" (UID: \"80443a02-bea0-40dc-aff2-9472db3c29f6\") " pod="openshift-marketplace/redhat-operators-d4w6n" Feb 19 10:13:40 
crc kubenswrapper[4965]: I0219 10:13:40.013044 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80443a02-bea0-40dc-aff2-9472db3c29f6-utilities\") pod \"redhat-operators-d4w6n\" (UID: \"80443a02-bea0-40dc-aff2-9472db3c29f6\") " pod="openshift-marketplace/redhat-operators-d4w6n" Feb 19 10:13:40 crc kubenswrapper[4965]: I0219 10:13:40.013211 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80443a02-bea0-40dc-aff2-9472db3c29f6-catalog-content\") pod \"redhat-operators-d4w6n\" (UID: \"80443a02-bea0-40dc-aff2-9472db3c29f6\") " pod="openshift-marketplace/redhat-operators-d4w6n" Feb 19 10:13:40 crc kubenswrapper[4965]: I0219 10:13:40.013295 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zhhr\" (UniqueName: \"kubernetes.io/projected/80443a02-bea0-40dc-aff2-9472db3c29f6-kube-api-access-9zhhr\") pod \"redhat-operators-d4w6n\" (UID: \"80443a02-bea0-40dc-aff2-9472db3c29f6\") " pod="openshift-marketplace/redhat-operators-d4w6n" Feb 19 10:13:40 crc kubenswrapper[4965]: I0219 10:13:40.013625 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80443a02-bea0-40dc-aff2-9472db3c29f6-utilities\") pod \"redhat-operators-d4w6n\" (UID: \"80443a02-bea0-40dc-aff2-9472db3c29f6\") " pod="openshift-marketplace/redhat-operators-d4w6n" Feb 19 10:13:40 crc kubenswrapper[4965]: I0219 10:13:40.013711 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80443a02-bea0-40dc-aff2-9472db3c29f6-catalog-content\") pod \"redhat-operators-d4w6n\" (UID: \"80443a02-bea0-40dc-aff2-9472db3c29f6\") " pod="openshift-marketplace/redhat-operators-d4w6n" Feb 19 10:13:40 crc kubenswrapper[4965]: I0219 10:13:40.051919 4965 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zhhr\" (UniqueName: \"kubernetes.io/projected/80443a02-bea0-40dc-aff2-9472db3c29f6-kube-api-access-9zhhr\") pod \"redhat-operators-d4w6n\" (UID: \"80443a02-bea0-40dc-aff2-9472db3c29f6\") " pod="openshift-marketplace/redhat-operators-d4w6n" Feb 19 10:13:40 crc kubenswrapper[4965]: I0219 10:13:40.215032 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d4w6n" Feb 19 10:13:40 crc kubenswrapper[4965]: I0219 10:13:40.669535 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d4w6n"] Feb 19 10:13:40 crc kubenswrapper[4965]: I0219 10:13:40.784461 4965 scope.go:117] "RemoveContainer" containerID="ef632fe3e3bf9c0ed0f47da3d2d474210be51f58973ecfe5bb091366f7778748" Feb 19 10:13:40 crc kubenswrapper[4965]: I0219 10:13:40.828390 4965 scope.go:117] "RemoveContainer" containerID="aa4c026affdae8309e122dbdb7ae257dfa05e7aaab8f20bcd8b59c50364162e4" Feb 19 10:13:40 crc kubenswrapper[4965]: I0219 10:13:40.868888 4965 scope.go:117] "RemoveContainer" containerID="5b688329c2653fc9cf4ab20dd9f742d9cb2861aa97797074f100f283f322803b" Feb 19 10:13:40 crc kubenswrapper[4965]: I0219 10:13:40.894778 4965 scope.go:117] "RemoveContainer" containerID="dba155e576c3de023f13b353f54421392f022889e1b71ca59d844da11ccfb4f9" Feb 19 10:13:40 crc kubenswrapper[4965]: I0219 10:13:40.946242 4965 scope.go:117] "RemoveContainer" containerID="8140197a941920e015288e8493da3b9f38be477528a2aa04ccfd44714bb7d857" Feb 19 10:13:40 crc kubenswrapper[4965]: I0219 10:13:40.975677 4965 scope.go:117] "RemoveContainer" containerID="3ebc0b821875d223cce0fb49a14bd7d11f2df4cf95d0585b31c4c625055df862" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.009425 4965 scope.go:117] "RemoveContainer" containerID="e4c39128a06b341fe96b189d872bd76c045429dbb13f013f0d1a36f7273a1afc" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 
10:13:41.029143 4965 scope.go:117] "RemoveContainer" containerID="0a713c7e515c2ad86f134cc876a83fc23beb8a521c00339ea120389fa4cc470f" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.061912 4965 scope.go:117] "RemoveContainer" containerID="5febd91fdf00927af6661c02036c04492a2c683750f9e984e67d385fa02aff6d" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.173768 4965 scope.go:117] "RemoveContainer" containerID="51cbe1456e4397c88bed1f3860a060a92b32671437b6d75e715fd55941d4693e" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.202799 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.339760 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04f58633-9350-49a8-9c41-522490a298eb-inventory\") pod \"04f58633-9350-49a8-9c41-522490a298eb\" (UID: \"04f58633-9350-49a8-9c41-522490a298eb\") " Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.339918 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04f58633-9350-49a8-9c41-522490a298eb-ssh-key-openstack-edpm-ipam\") pod \"04f58633-9350-49a8-9c41-522490a298eb\" (UID: \"04f58633-9350-49a8-9c41-522490a298eb\") " Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.340019 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtfmp\" (UniqueName: \"kubernetes.io/projected/04f58633-9350-49a8-9c41-522490a298eb-kube-api-access-vtfmp\") pod \"04f58633-9350-49a8-9c41-522490a298eb\" (UID: \"04f58633-9350-49a8-9c41-522490a298eb\") " Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.345257 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/04f58633-9350-49a8-9c41-522490a298eb-kube-api-access-vtfmp" (OuterVolumeSpecName: "kube-api-access-vtfmp") pod "04f58633-9350-49a8-9c41-522490a298eb" (UID: "04f58633-9350-49a8-9c41-522490a298eb"). InnerVolumeSpecName "kube-api-access-vtfmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.371429 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04f58633-9350-49a8-9c41-522490a298eb-inventory" (OuterVolumeSpecName: "inventory") pod "04f58633-9350-49a8-9c41-522490a298eb" (UID: "04f58633-9350-49a8-9c41-522490a298eb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.382882 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04f58633-9350-49a8-9c41-522490a298eb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "04f58633-9350-49a8-9c41-522490a298eb" (UID: "04f58633-9350-49a8-9c41-522490a298eb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.443146 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtfmp\" (UniqueName: \"kubernetes.io/projected/04f58633-9350-49a8-9c41-522490a298eb-kube-api-access-vtfmp\") on node \"crc\" DevicePath \"\"" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.443216 4965 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04f58633-9350-49a8-9c41-522490a298eb-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.443231 4965 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04f58633-9350-49a8-9c41-522490a298eb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.677336 4965 generic.go:334] "Generic (PLEG): container finished" podID="80443a02-bea0-40dc-aff2-9472db3c29f6" containerID="ebf8d0e0a595dc6a01019dd33f314de1001f38cd0351beee4a1ee4ae4ff913a3" exitCode=0 Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.677388 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4w6n" event={"ID":"80443a02-bea0-40dc-aff2-9472db3c29f6","Type":"ContainerDied","Data":"ebf8d0e0a595dc6a01019dd33f314de1001f38cd0351beee4a1ee4ae4ff913a3"} Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.677463 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4w6n" event={"ID":"80443a02-bea0-40dc-aff2-9472db3c29f6","Type":"ContainerStarted","Data":"621484ad8bd9adfc0cd8be09e8168afe6dc0a0782d5d7766a89b7914d501a490"} Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.679961 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z" 
event={"ID":"04f58633-9350-49a8-9c41-522490a298eb","Type":"ContainerDied","Data":"d6ea33994f73f90dfc0406ac26fd6df376657031d298372ff81a9da853d5381e"} Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.679987 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6ea33994f73f90dfc0406ac26fd6df376657031d298372ff81a9da853d5381e" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.680028 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.766565 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-58l4m"] Feb 19 10:13:41 crc kubenswrapper[4965]: E0219 10:13:41.767007 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f58633-9350-49a8-9c41-522490a298eb" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.767026 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f58633-9350-49a8-9c41-522490a298eb" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.767246 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="04f58633-9350-49a8-9c41-522490a298eb" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.767995 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-58l4m" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.769840 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.769948 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.770127 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cthw6" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.772277 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.777830 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-58l4m"] Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.954079 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpq5q\" (UniqueName: \"kubernetes.io/projected/9873ade5-a134-4b72-bbfe-468df59b993f-kube-api-access-cpq5q\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-58l4m\" (UID: \"9873ade5-a134-4b72-bbfe-468df59b993f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-58l4m" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 10:13:41.954716 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9873ade5-a134-4b72-bbfe-468df59b993f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-58l4m\" (UID: \"9873ade5-a134-4b72-bbfe-468df59b993f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-58l4m" Feb 19 10:13:41 crc kubenswrapper[4965]: I0219 
10:13:41.954929 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9873ade5-a134-4b72-bbfe-468df59b993f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-58l4m\" (UID: \"9873ade5-a134-4b72-bbfe-468df59b993f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-58l4m" Feb 19 10:13:42 crc kubenswrapper[4965]: I0219 10:13:42.057038 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpq5q\" (UniqueName: \"kubernetes.io/projected/9873ade5-a134-4b72-bbfe-468df59b993f-kube-api-access-cpq5q\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-58l4m\" (UID: \"9873ade5-a134-4b72-bbfe-468df59b993f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-58l4m" Feb 19 10:13:42 crc kubenswrapper[4965]: I0219 10:13:42.057124 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9873ade5-a134-4b72-bbfe-468df59b993f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-58l4m\" (UID: \"9873ade5-a134-4b72-bbfe-468df59b993f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-58l4m" Feb 19 10:13:42 crc kubenswrapper[4965]: I0219 10:13:42.057189 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9873ade5-a134-4b72-bbfe-468df59b993f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-58l4m\" (UID: \"9873ade5-a134-4b72-bbfe-468df59b993f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-58l4m" Feb 19 10:13:42 crc kubenswrapper[4965]: I0219 10:13:42.062708 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9873ade5-a134-4b72-bbfe-468df59b993f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-58l4m\" (UID: \"9873ade5-a134-4b72-bbfe-468df59b993f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-58l4m" Feb 19 10:13:42 crc kubenswrapper[4965]: I0219 10:13:42.065740 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9873ade5-a134-4b72-bbfe-468df59b993f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-58l4m\" (UID: \"9873ade5-a134-4b72-bbfe-468df59b993f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-58l4m" Feb 19 10:13:42 crc kubenswrapper[4965]: I0219 10:13:42.077404 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpq5q\" (UniqueName: \"kubernetes.io/projected/9873ade5-a134-4b72-bbfe-468df59b993f-kube-api-access-cpq5q\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-58l4m\" (UID: \"9873ade5-a134-4b72-bbfe-468df59b993f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-58l4m" Feb 19 10:13:42 crc kubenswrapper[4965]: I0219 10:13:42.086626 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-58l4m" Feb 19 10:13:42 crc kubenswrapper[4965]: W0219 10:13:42.605064 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9873ade5_a134_4b72_bbfe_468df59b993f.slice/crio-92401147743aaec4ab83e3bf4b9c2002ed5cab315eb353585296540bd0de2ec1 WatchSource:0}: Error finding container 92401147743aaec4ab83e3bf4b9c2002ed5cab315eb353585296540bd0de2ec1: Status 404 returned error can't find the container with id 92401147743aaec4ab83e3bf4b9c2002ed5cab315eb353585296540bd0de2ec1 Feb 19 10:13:42 crc kubenswrapper[4965]: I0219 10:13:42.606919 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-58l4m"] Feb 19 10:13:42 crc kubenswrapper[4965]: I0219 10:13:42.690493 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-58l4m" event={"ID":"9873ade5-a134-4b72-bbfe-468df59b993f","Type":"ContainerStarted","Data":"92401147743aaec4ab83e3bf4b9c2002ed5cab315eb353585296540bd0de2ec1"} Feb 19 10:13:42 crc kubenswrapper[4965]: I0219 10:13:42.693730 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4w6n" event={"ID":"80443a02-bea0-40dc-aff2-9472db3c29f6","Type":"ContainerStarted","Data":"f8a7d2e4a3c7be8dd6317021d563d6cde0b08e12da21e6a050a45b72f9153c33"} Feb 19 10:13:43 crc kubenswrapper[4965]: I0219 10:13:43.752125 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-58l4m" event={"ID":"9873ade5-a134-4b72-bbfe-468df59b993f","Type":"ContainerStarted","Data":"3e505d889d475cac362ca55bba7448d37ccbe3a7e9947935817f557d389059f8"} Feb 19 10:13:43 crc kubenswrapper[4965]: I0219 10:13:43.796185 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-58l4m" podStartSLOduration=2.371293218 podStartE2EDuration="2.796168687s" podCreationTimestamp="2026-02-19 10:13:41 +0000 UTC" firstStartedPulling="2026-02-19 10:13:42.610412453 +0000 UTC m=+1878.231733763" lastFinishedPulling="2026-02-19 10:13:43.035287922 +0000 UTC m=+1878.656609232" observedRunningTime="2026-02-19 10:13:43.791246718 +0000 UTC m=+1879.412568028" watchObservedRunningTime="2026-02-19 10:13:43.796168687 +0000 UTC m=+1879.417489997" Feb 19 10:13:46 crc kubenswrapper[4965]: I0219 10:13:46.042680 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-qllz5"] Feb 19 10:13:46 crc kubenswrapper[4965]: I0219 10:13:46.052720 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-qllz5"] Feb 19 10:13:47 crc kubenswrapper[4965]: I0219 10:13:47.040041 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-7rwpz"] Feb 19 10:13:47 crc kubenswrapper[4965]: I0219 10:13:47.051544 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-7rwpz"] Feb 19 10:13:47 crc kubenswrapper[4965]: I0219 10:13:47.212287 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce8bac0d-7aa6-437f-b234-370384cf1153" path="/var/lib/kubelet/pods/ce8bac0d-7aa6-437f-b234-370384cf1153/volumes" Feb 19 10:13:47 crc kubenswrapper[4965]: I0219 10:13:47.213013 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7bc0481-970b-4e8e-868f-490ea553952e" path="/var/lib/kubelet/pods/d7bc0481-970b-4e8e-868f-490ea553952e/volumes" Feb 19 10:13:48 crc kubenswrapper[4965]: I0219 10:13:48.797892 4965 generic.go:334] "Generic (PLEG): container finished" podID="80443a02-bea0-40dc-aff2-9472db3c29f6" containerID="f8a7d2e4a3c7be8dd6317021d563d6cde0b08e12da21e6a050a45b72f9153c33" exitCode=0 Feb 19 10:13:48 crc kubenswrapper[4965]: I0219 10:13:48.798024 4965 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4w6n" event={"ID":"80443a02-bea0-40dc-aff2-9472db3c29f6","Type":"ContainerDied","Data":"f8a7d2e4a3c7be8dd6317021d563d6cde0b08e12da21e6a050a45b72f9153c33"} Feb 19 10:13:48 crc kubenswrapper[4965]: I0219 10:13:48.801019 4965 generic.go:334] "Generic (PLEG): container finished" podID="9873ade5-a134-4b72-bbfe-468df59b993f" containerID="3e505d889d475cac362ca55bba7448d37ccbe3a7e9947935817f557d389059f8" exitCode=0 Feb 19 10:13:48 crc kubenswrapper[4965]: I0219 10:13:48.801063 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-58l4m" event={"ID":"9873ade5-a134-4b72-bbfe-468df59b993f","Type":"ContainerDied","Data":"3e505d889d475cac362ca55bba7448d37ccbe3a7e9947935817f557d389059f8"} Feb 19 10:13:49 crc kubenswrapper[4965]: I0219 10:13:49.811602 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4w6n" event={"ID":"80443a02-bea0-40dc-aff2-9472db3c29f6","Type":"ContainerStarted","Data":"3f5a49c6d8e44ab76abd8ea5f1aa2b33ddbe9ee9f278138253d250926a85f1c3"} Feb 19 10:13:49 crc kubenswrapper[4965]: I0219 10:13:49.839172 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d4w6n" podStartSLOduration=3.3038806 podStartE2EDuration="10.839153123s" podCreationTimestamp="2026-02-19 10:13:39 +0000 UTC" firstStartedPulling="2026-02-19 10:13:41.679306525 +0000 UTC m=+1877.300627835" lastFinishedPulling="2026-02-19 10:13:49.214579048 +0000 UTC m=+1884.835900358" observedRunningTime="2026-02-19 10:13:49.837775459 +0000 UTC m=+1885.459096769" watchObservedRunningTime="2026-02-19 10:13:49.839153123 +0000 UTC m=+1885.460474433" Feb 19 10:13:50 crc kubenswrapper[4965]: I0219 10:13:50.216320 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d4w6n" Feb 19 10:13:50 crc 
kubenswrapper[4965]: I0219 10:13:50.216682 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d4w6n" Feb 19 10:13:50 crc kubenswrapper[4965]: I0219 10:13:50.367771 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-58l4m" Feb 19 10:13:50 crc kubenswrapper[4965]: I0219 10:13:50.529020 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpq5q\" (UniqueName: \"kubernetes.io/projected/9873ade5-a134-4b72-bbfe-468df59b993f-kube-api-access-cpq5q\") pod \"9873ade5-a134-4b72-bbfe-468df59b993f\" (UID: \"9873ade5-a134-4b72-bbfe-468df59b993f\") " Feb 19 10:13:50 crc kubenswrapper[4965]: I0219 10:13:50.529337 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9873ade5-a134-4b72-bbfe-468df59b993f-inventory\") pod \"9873ade5-a134-4b72-bbfe-468df59b993f\" (UID: \"9873ade5-a134-4b72-bbfe-468df59b993f\") " Feb 19 10:13:50 crc kubenswrapper[4965]: I0219 10:13:50.529362 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9873ade5-a134-4b72-bbfe-468df59b993f-ssh-key-openstack-edpm-ipam\") pod \"9873ade5-a134-4b72-bbfe-468df59b993f\" (UID: \"9873ade5-a134-4b72-bbfe-468df59b993f\") " Feb 19 10:13:50 crc kubenswrapper[4965]: I0219 10:13:50.533979 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9873ade5-a134-4b72-bbfe-468df59b993f-kube-api-access-cpq5q" (OuterVolumeSpecName: "kube-api-access-cpq5q") pod "9873ade5-a134-4b72-bbfe-468df59b993f" (UID: "9873ade5-a134-4b72-bbfe-468df59b993f"). InnerVolumeSpecName "kube-api-access-cpq5q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:13:50 crc kubenswrapper[4965]: I0219 10:13:50.556038 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9873ade5-a134-4b72-bbfe-468df59b993f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9873ade5-a134-4b72-bbfe-468df59b993f" (UID: "9873ade5-a134-4b72-bbfe-468df59b993f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:13:50 crc kubenswrapper[4965]: I0219 10:13:50.557670 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9873ade5-a134-4b72-bbfe-468df59b993f-inventory" (OuterVolumeSpecName: "inventory") pod "9873ade5-a134-4b72-bbfe-468df59b993f" (UID: "9873ade5-a134-4b72-bbfe-468df59b993f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:13:50 crc kubenswrapper[4965]: I0219 10:13:50.631544 4965 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9873ade5-a134-4b72-bbfe-468df59b993f-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:13:50 crc kubenswrapper[4965]: I0219 10:13:50.631775 4965 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9873ade5-a134-4b72-bbfe-468df59b993f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:13:50 crc kubenswrapper[4965]: I0219 10:13:50.631853 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpq5q\" (UniqueName: \"kubernetes.io/projected/9873ade5-a134-4b72-bbfe-468df59b993f-kube-api-access-cpq5q\") on node \"crc\" DevicePath \"\"" Feb 19 10:13:50 crc kubenswrapper[4965]: I0219 10:13:50.840495 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-58l4m" Feb 19 10:13:50 crc kubenswrapper[4965]: I0219 10:13:50.840525 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-58l4m" event={"ID":"9873ade5-a134-4b72-bbfe-468df59b993f","Type":"ContainerDied","Data":"92401147743aaec4ab83e3bf4b9c2002ed5cab315eb353585296540bd0de2ec1"} Feb 19 10:13:50 crc kubenswrapper[4965]: I0219 10:13:50.840589 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92401147743aaec4ab83e3bf4b9c2002ed5cab315eb353585296540bd0de2ec1" Feb 19 10:13:50 crc kubenswrapper[4965]: I0219 10:13:50.951097 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpx6m"] Feb 19 10:13:50 crc kubenswrapper[4965]: E0219 10:13:50.951486 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9873ade5-a134-4b72-bbfe-468df59b993f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 10:13:50 crc kubenswrapper[4965]: I0219 10:13:50.951505 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="9873ade5-a134-4b72-bbfe-468df59b993f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 10:13:50 crc kubenswrapper[4965]: I0219 10:13:50.951705 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="9873ade5-a134-4b72-bbfe-468df59b993f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 10:13:50 crc kubenswrapper[4965]: I0219 10:13:50.952449 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpx6m"
Feb 19 10:13:50 crc kubenswrapper[4965]: I0219 10:13:50.954320 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 19 10:13:50 crc kubenswrapper[4965]: I0219 10:13:50.959670 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 19 10:13:50 crc kubenswrapper[4965]: I0219 10:13:50.959720 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 19 10:13:50 crc kubenswrapper[4965]: I0219 10:13:50.960006 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cthw6"
Feb 19 10:13:50 crc kubenswrapper[4965]: I0219 10:13:50.988035 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpx6m"]
Feb 19 10:13:51 crc kubenswrapper[4965]: I0219 10:13:51.143006 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsrnx\" (UniqueName: \"kubernetes.io/projected/0f72a778-ba2a-4454-bba8-865897b5d656-kube-api-access-rsrnx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kpx6m\" (UID: \"0f72a778-ba2a-4454-bba8-865897b5d656\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpx6m"
Feb 19 10:13:51 crc kubenswrapper[4965]: I0219 10:13:51.143163 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f72a778-ba2a-4454-bba8-865897b5d656-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kpx6m\" (UID: \"0f72a778-ba2a-4454-bba8-865897b5d656\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpx6m"
Feb 19 10:13:51 crc kubenswrapper[4965]: I0219 10:13:51.143261 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f72a778-ba2a-4454-bba8-865897b5d656-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kpx6m\" (UID: \"0f72a778-ba2a-4454-bba8-865897b5d656\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpx6m"
Feb 19 10:13:51 crc kubenswrapper[4965]: I0219 10:13:51.245390 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f72a778-ba2a-4454-bba8-865897b5d656-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kpx6m\" (UID: \"0f72a778-ba2a-4454-bba8-865897b5d656\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpx6m"
Feb 19 10:13:51 crc kubenswrapper[4965]: I0219 10:13:51.245504 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsrnx\" (UniqueName: \"kubernetes.io/projected/0f72a778-ba2a-4454-bba8-865897b5d656-kube-api-access-rsrnx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kpx6m\" (UID: \"0f72a778-ba2a-4454-bba8-865897b5d656\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpx6m"
Feb 19 10:13:51 crc kubenswrapper[4965]: I0219 10:13:51.245591 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f72a778-ba2a-4454-bba8-865897b5d656-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kpx6m\" (UID: \"0f72a778-ba2a-4454-bba8-865897b5d656\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpx6m"
Feb 19 10:13:51 crc kubenswrapper[4965]: I0219 10:13:51.249556 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f72a778-ba2a-4454-bba8-865897b5d656-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kpx6m\" (UID: \"0f72a778-ba2a-4454-bba8-865897b5d656\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpx6m"
Feb 19 10:13:51 crc kubenswrapper[4965]: I0219 10:13:51.252019 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f72a778-ba2a-4454-bba8-865897b5d656-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kpx6m\" (UID: \"0f72a778-ba2a-4454-bba8-865897b5d656\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpx6m"
Feb 19 10:13:51 crc kubenswrapper[4965]: I0219 10:13:51.261439 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsrnx\" (UniqueName: \"kubernetes.io/projected/0f72a778-ba2a-4454-bba8-865897b5d656-kube-api-access-rsrnx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kpx6m\" (UID: \"0f72a778-ba2a-4454-bba8-865897b5d656\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpx6m"
Feb 19 10:13:51 crc kubenswrapper[4965]: I0219 10:13:51.267298 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpx6m"
Feb 19 10:13:51 crc kubenswrapper[4965]: I0219 10:13:51.275917 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d4w6n" podUID="80443a02-bea0-40dc-aff2-9472db3c29f6" containerName="registry-server" probeResult="failure" output=<
Feb 19 10:13:51 crc kubenswrapper[4965]: timeout: failed to connect service ":50051" within 1s
Feb 19 10:13:51 crc kubenswrapper[4965]: >
Feb 19 10:13:51 crc kubenswrapper[4965]: W0219 10:13:51.933374 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f72a778_ba2a_4454_bba8_865897b5d656.slice/crio-205a08fa272506b7dde60e9ea25a4d99eefb3258be94f050f06dbe9fadfb714c WatchSource:0}: Error finding container 205a08fa272506b7dde60e9ea25a4d99eefb3258be94f050f06dbe9fadfb714c: Status 404 returned error can't find the container with id 205a08fa272506b7dde60e9ea25a4d99eefb3258be94f050f06dbe9fadfb714c
Feb 19 10:13:51 crc kubenswrapper[4965]: I0219 10:13:51.935829 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpx6m"]
Feb 19 10:13:52 crc kubenswrapper[4965]: I0219 10:13:52.861798 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpx6m" event={"ID":"0f72a778-ba2a-4454-bba8-865897b5d656","Type":"ContainerStarted","Data":"ba15508ecf9e4c2e3779443557f18d625cae883fd79c77bc5c9d3bbfabbc095f"}
Feb 19 10:13:52 crc kubenswrapper[4965]: I0219 10:13:52.862049 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpx6m" event={"ID":"0f72a778-ba2a-4454-bba8-865897b5d656","Type":"ContainerStarted","Data":"205a08fa272506b7dde60e9ea25a4d99eefb3258be94f050f06dbe9fadfb714c"}
Feb 19 10:14:01 crc kubenswrapper[4965]: I0219 10:14:01.276626 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d4w6n" podUID="80443a02-bea0-40dc-aff2-9472db3c29f6" containerName="registry-server" probeResult="failure" output=<
Feb 19 10:14:01 crc kubenswrapper[4965]: timeout: failed to connect service ":50051" within 1s
Feb 19 10:14:01 crc kubenswrapper[4965]: >
Feb 19 10:14:11 crc kubenswrapper[4965]: I0219 10:14:11.277162 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d4w6n" podUID="80443a02-bea0-40dc-aff2-9472db3c29f6" containerName="registry-server" probeResult="failure" output=<
Feb 19 10:14:11 crc kubenswrapper[4965]: timeout: failed to connect service ":50051" within 1s
Feb 19 10:14:11 crc kubenswrapper[4965]: >
Feb 19 10:14:20 crc kubenswrapper[4965]: I0219 10:14:20.273496 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d4w6n"
Feb 19 10:14:20 crc kubenswrapper[4965]: I0219 10:14:20.296319 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpx6m" podStartSLOduration=29.865523527 podStartE2EDuration="30.296292828s" podCreationTimestamp="2026-02-19 10:13:50 +0000 UTC" firstStartedPulling="2026-02-19 10:13:51.935480408 +0000 UTC m=+1887.556801718" lastFinishedPulling="2026-02-19 10:13:52.366249709 +0000 UTC m=+1887.987571019" observedRunningTime="2026-02-19 10:13:52.880345928 +0000 UTC m=+1888.501667248" watchObservedRunningTime="2026-02-19 10:14:20.296292828 +0000 UTC m=+1915.917614178"
Feb 19 10:14:20 crc kubenswrapper[4965]: I0219 10:14:20.364091 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d4w6n"
Feb 19 10:14:20 crc kubenswrapper[4965]: I0219 10:14:20.515731 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d4w6n"]
Feb 19 10:14:21 crc kubenswrapper[4965]: I0219 10:14:21.061695 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-ffs2w"]
Feb 19 10:14:21 crc kubenswrapper[4965]: I0219 10:14:21.076505 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-fl9jp"]
Feb 19 10:14:21 crc kubenswrapper[4965]: I0219 10:14:21.089824 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-ffs2w"]
Feb 19 10:14:21 crc kubenswrapper[4965]: I0219 10:14:21.104222 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-fl9jp"]
Feb 19 10:14:21 crc kubenswrapper[4965]: I0219 10:14:21.211404 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ff16dee-aeea-4cb2-ba44-a8de28ff44ec" path="/var/lib/kubelet/pods/0ff16dee-aeea-4cb2-ba44-a8de28ff44ec/volumes"
Feb 19 10:14:21 crc kubenswrapper[4965]: I0219 10:14:21.212365 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="140e538f-9a0d-4c71-80a3-8710e0622021" path="/var/lib/kubelet/pods/140e538f-9a0d-4c71-80a3-8710e0622021/volumes"
Feb 19 10:14:22 crc kubenswrapper[4965]: I0219 10:14:22.039187 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-ec51-account-create-update-s5v66"]
Feb 19 10:14:22 crc kubenswrapper[4965]: I0219 10:14:22.051613 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-4d8f-account-create-update-xdplb"]
Feb 19 10:14:22 crc kubenswrapper[4965]: I0219 10:14:22.087223 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-2j7hd"]
Feb 19 10:14:22 crc kubenswrapper[4965]: I0219 10:14:22.100499 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-ec51-account-create-update-s5v66"]
Feb 19 10:14:22 crc kubenswrapper[4965]: I0219 10:14:22.108719 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-2j7hd"]
Feb 19 10:14:22 crc kubenswrapper[4965]: I0219 10:14:22.119115 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-4d8f-account-create-update-xdplb"]
Feb 19 10:14:22 crc kubenswrapper[4965]: I0219 10:14:22.175605 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d4w6n" podUID="80443a02-bea0-40dc-aff2-9472db3c29f6" containerName="registry-server" containerID="cri-o://3f5a49c6d8e44ab76abd8ea5f1aa2b33ddbe9ee9f278138253d250926a85f1c3" gracePeriod=2
Feb 19 10:14:22 crc kubenswrapper[4965]: I0219 10:14:22.744012 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d4w6n"
Feb 19 10:14:22 crc kubenswrapper[4965]: I0219 10:14:22.868903 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80443a02-bea0-40dc-aff2-9472db3c29f6-catalog-content\") pod \"80443a02-bea0-40dc-aff2-9472db3c29f6\" (UID: \"80443a02-bea0-40dc-aff2-9472db3c29f6\") "
Feb 19 10:14:22 crc kubenswrapper[4965]: I0219 10:14:22.869030 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zhhr\" (UniqueName: \"kubernetes.io/projected/80443a02-bea0-40dc-aff2-9472db3c29f6-kube-api-access-9zhhr\") pod \"80443a02-bea0-40dc-aff2-9472db3c29f6\" (UID: \"80443a02-bea0-40dc-aff2-9472db3c29f6\") "
Feb 19 10:14:22 crc kubenswrapper[4965]: I0219 10:14:22.869327 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80443a02-bea0-40dc-aff2-9472db3c29f6-utilities\") pod \"80443a02-bea0-40dc-aff2-9472db3c29f6\" (UID: \"80443a02-bea0-40dc-aff2-9472db3c29f6\") "
Feb 19 10:14:22 crc kubenswrapper[4965]: I0219 10:14:22.871035 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80443a02-bea0-40dc-aff2-9472db3c29f6-utilities" (OuterVolumeSpecName: "utilities") pod "80443a02-bea0-40dc-aff2-9472db3c29f6" (UID: "80443a02-bea0-40dc-aff2-9472db3c29f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:14:22 crc kubenswrapper[4965]: I0219 10:14:22.880841 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80443a02-bea0-40dc-aff2-9472db3c29f6-kube-api-access-9zhhr" (OuterVolumeSpecName: "kube-api-access-9zhhr") pod "80443a02-bea0-40dc-aff2-9472db3c29f6" (UID: "80443a02-bea0-40dc-aff2-9472db3c29f6"). InnerVolumeSpecName "kube-api-access-9zhhr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:14:22 crc kubenswrapper[4965]: I0219 10:14:22.972359 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zhhr\" (UniqueName: \"kubernetes.io/projected/80443a02-bea0-40dc-aff2-9472db3c29f6-kube-api-access-9zhhr\") on node \"crc\" DevicePath \"\""
Feb 19 10:14:22 crc kubenswrapper[4965]: I0219 10:14:22.972414 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80443a02-bea0-40dc-aff2-9472db3c29f6-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 10:14:23 crc kubenswrapper[4965]: I0219 10:14:23.027313 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80443a02-bea0-40dc-aff2-9472db3c29f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80443a02-bea0-40dc-aff2-9472db3c29f6" (UID: "80443a02-bea0-40dc-aff2-9472db3c29f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:14:23 crc kubenswrapper[4965]: I0219 10:14:23.041492 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-b85b-account-create-update-z8f4q"]
Feb 19 10:14:23 crc kubenswrapper[4965]: I0219 10:14:23.053317 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-b85b-account-create-update-z8f4q"]
Feb 19 10:14:23 crc kubenswrapper[4965]: I0219 10:14:23.074952 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80443a02-bea0-40dc-aff2-9472db3c29f6-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 10:14:23 crc kubenswrapper[4965]: I0219 10:14:23.187230 4965 generic.go:334] "Generic (PLEG): container finished" podID="80443a02-bea0-40dc-aff2-9472db3c29f6" containerID="3f5a49c6d8e44ab76abd8ea5f1aa2b33ddbe9ee9f278138253d250926a85f1c3" exitCode=0
Feb 19 10:14:23 crc kubenswrapper[4965]: I0219 10:14:23.187275 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4w6n" event={"ID":"80443a02-bea0-40dc-aff2-9472db3c29f6","Type":"ContainerDied","Data":"3f5a49c6d8e44ab76abd8ea5f1aa2b33ddbe9ee9f278138253d250926a85f1c3"}
Feb 19 10:14:23 crc kubenswrapper[4965]: I0219 10:14:23.187314 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4w6n" event={"ID":"80443a02-bea0-40dc-aff2-9472db3c29f6","Type":"ContainerDied","Data":"621484ad8bd9adfc0cd8be09e8168afe6dc0a0782d5d7766a89b7914d501a490"}
Feb 19 10:14:23 crc kubenswrapper[4965]: I0219 10:14:23.187337 4965 scope.go:117] "RemoveContainer" containerID="3f5a49c6d8e44ab76abd8ea5f1aa2b33ddbe9ee9f278138253d250926a85f1c3"
Feb 19 10:14:23 crc kubenswrapper[4965]: I0219 10:14:23.188025 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d4w6n"
Feb 19 10:14:23 crc kubenswrapper[4965]: I0219 10:14:23.212274 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08122272-8ff7-4dad-95ea-c9190baad3ba" path="/var/lib/kubelet/pods/08122272-8ff7-4dad-95ea-c9190baad3ba/volumes"
Feb 19 10:14:23 crc kubenswrapper[4965]: I0219 10:14:23.212997 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b83107c-ba00-4687-9c58-94535f5a9a1e" path="/var/lib/kubelet/pods/6b83107c-ba00-4687-9c58-94535f5a9a1e/volumes"
Feb 19 10:14:23 crc kubenswrapper[4965]: I0219 10:14:23.214496 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a128eedd-c0da-49f3-8d7f-631d3a66a6d2" path="/var/lib/kubelet/pods/a128eedd-c0da-49f3-8d7f-631d3a66a6d2/volumes"
Feb 19 10:14:23 crc kubenswrapper[4965]: I0219 10:14:23.215060 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f" path="/var/lib/kubelet/pods/fd7da1a7-3ef7-4e09-bc9e-1e672e6b306f/volumes"
Feb 19 10:14:23 crc kubenswrapper[4965]: I0219 10:14:23.215349 4965 scope.go:117] "RemoveContainer" containerID="f8a7d2e4a3c7be8dd6317021d563d6cde0b08e12da21e6a050a45b72f9153c33"
Feb 19 10:14:23 crc kubenswrapper[4965]: I0219 10:14:23.254258 4965 scope.go:117] "RemoveContainer" containerID="ebf8d0e0a595dc6a01019dd33f314de1001f38cd0351beee4a1ee4ae4ff913a3"
Feb 19 10:14:23 crc kubenswrapper[4965]: I0219 10:14:23.261589 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d4w6n"]
Feb 19 10:14:23 crc kubenswrapper[4965]: I0219 10:14:23.271681 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d4w6n"]
Feb 19 10:14:23 crc kubenswrapper[4965]: I0219 10:14:23.337883 4965 scope.go:117] "RemoveContainer" containerID="3f5a49c6d8e44ab76abd8ea5f1aa2b33ddbe9ee9f278138253d250926a85f1c3"
Feb 19 10:14:23 crc kubenswrapper[4965]: E0219 10:14:23.338907 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f5a49c6d8e44ab76abd8ea5f1aa2b33ddbe9ee9f278138253d250926a85f1c3\": container with ID starting with 3f5a49c6d8e44ab76abd8ea5f1aa2b33ddbe9ee9f278138253d250926a85f1c3 not found: ID does not exist" containerID="3f5a49c6d8e44ab76abd8ea5f1aa2b33ddbe9ee9f278138253d250926a85f1c3"
Feb 19 10:14:23 crc kubenswrapper[4965]: I0219 10:14:23.339055 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f5a49c6d8e44ab76abd8ea5f1aa2b33ddbe9ee9f278138253d250926a85f1c3"} err="failed to get container status \"3f5a49c6d8e44ab76abd8ea5f1aa2b33ddbe9ee9f278138253d250926a85f1c3\": rpc error: code = NotFound desc = could not find container \"3f5a49c6d8e44ab76abd8ea5f1aa2b33ddbe9ee9f278138253d250926a85f1c3\": container with ID starting with 3f5a49c6d8e44ab76abd8ea5f1aa2b33ddbe9ee9f278138253d250926a85f1c3 not found: ID does not exist"
Feb 19 10:14:23 crc kubenswrapper[4965]: I0219 10:14:23.339084 4965 scope.go:117] "RemoveContainer" containerID="f8a7d2e4a3c7be8dd6317021d563d6cde0b08e12da21e6a050a45b72f9153c33"
Feb 19 10:14:23 crc kubenswrapper[4965]: E0219 10:14:23.341707 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8a7d2e4a3c7be8dd6317021d563d6cde0b08e12da21e6a050a45b72f9153c33\": container with ID starting with f8a7d2e4a3c7be8dd6317021d563d6cde0b08e12da21e6a050a45b72f9153c33 not found: ID does not exist" containerID="f8a7d2e4a3c7be8dd6317021d563d6cde0b08e12da21e6a050a45b72f9153c33"
Feb 19 10:14:23 crc kubenswrapper[4965]: I0219 10:14:23.341743 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a7d2e4a3c7be8dd6317021d563d6cde0b08e12da21e6a050a45b72f9153c33"} err="failed to get container status \"f8a7d2e4a3c7be8dd6317021d563d6cde0b08e12da21e6a050a45b72f9153c33\": rpc error: code = NotFound desc = could not find container \"f8a7d2e4a3c7be8dd6317021d563d6cde0b08e12da21e6a050a45b72f9153c33\": container with ID starting with f8a7d2e4a3c7be8dd6317021d563d6cde0b08e12da21e6a050a45b72f9153c33 not found: ID does not exist"
Feb 19 10:14:23 crc kubenswrapper[4965]: I0219 10:14:23.341805 4965 scope.go:117] "RemoveContainer" containerID="ebf8d0e0a595dc6a01019dd33f314de1001f38cd0351beee4a1ee4ae4ff913a3"
Feb 19 10:14:23 crc kubenswrapper[4965]: E0219 10:14:23.344323 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebf8d0e0a595dc6a01019dd33f314de1001f38cd0351beee4a1ee4ae4ff913a3\": container with ID starting with ebf8d0e0a595dc6a01019dd33f314de1001f38cd0351beee4a1ee4ae4ff913a3 not found: ID does not exist" containerID="ebf8d0e0a595dc6a01019dd33f314de1001f38cd0351beee4a1ee4ae4ff913a3"
Feb 19 10:14:23 crc kubenswrapper[4965]: I0219 10:14:23.344352 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebf8d0e0a595dc6a01019dd33f314de1001f38cd0351beee4a1ee4ae4ff913a3"} err="failed to get container status \"ebf8d0e0a595dc6a01019dd33f314de1001f38cd0351beee4a1ee4ae4ff913a3\": rpc error: code = NotFound desc = could not find container \"ebf8d0e0a595dc6a01019dd33f314de1001f38cd0351beee4a1ee4ae4ff913a3\": container with ID starting with ebf8d0e0a595dc6a01019dd33f314de1001f38cd0351beee4a1ee4ae4ff913a3 not found: ID does not exist"
Feb 19 10:14:25 crc kubenswrapper[4965]: I0219 10:14:25.221083 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80443a02-bea0-40dc-aff2-9472db3c29f6" path="/var/lib/kubelet/pods/80443a02-bea0-40dc-aff2-9472db3c29f6/volumes"
Feb 19 10:14:27 crc kubenswrapper[4965]: I0219 10:14:27.241853 4965 generic.go:334] "Generic (PLEG): container finished" podID="0f72a778-ba2a-4454-bba8-865897b5d656" containerID="ba15508ecf9e4c2e3779443557f18d625cae883fd79c77bc5c9d3bbfabbc095f" exitCode=0
Feb 19 10:14:27 crc kubenswrapper[4965]: I0219 10:14:27.241910 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpx6m" event={"ID":"0f72a778-ba2a-4454-bba8-865897b5d656","Type":"ContainerDied","Data":"ba15508ecf9e4c2e3779443557f18d625cae883fd79c77bc5c9d3bbfabbc095f"}
Feb 19 10:14:28 crc kubenswrapper[4965]: I0219 10:14:28.821007 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpx6m"
Feb 19 10:14:28 crc kubenswrapper[4965]: I0219 10:14:28.929816 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f72a778-ba2a-4454-bba8-865897b5d656-inventory\") pod \"0f72a778-ba2a-4454-bba8-865897b5d656\" (UID: \"0f72a778-ba2a-4454-bba8-865897b5d656\") "
Feb 19 10:14:28 crc kubenswrapper[4965]: I0219 10:14:28.929892 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsrnx\" (UniqueName: \"kubernetes.io/projected/0f72a778-ba2a-4454-bba8-865897b5d656-kube-api-access-rsrnx\") pod \"0f72a778-ba2a-4454-bba8-865897b5d656\" (UID: \"0f72a778-ba2a-4454-bba8-865897b5d656\") "
Feb 19 10:14:28 crc kubenswrapper[4965]: I0219 10:14:28.930239 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f72a778-ba2a-4454-bba8-865897b5d656-ssh-key-openstack-edpm-ipam\") pod \"0f72a778-ba2a-4454-bba8-865897b5d656\" (UID: \"0f72a778-ba2a-4454-bba8-865897b5d656\") "
Feb 19 10:14:28 crc kubenswrapper[4965]: I0219 10:14:28.934899 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f72a778-ba2a-4454-bba8-865897b5d656-kube-api-access-rsrnx" (OuterVolumeSpecName: "kube-api-access-rsrnx") pod "0f72a778-ba2a-4454-bba8-865897b5d656" (UID: "0f72a778-ba2a-4454-bba8-865897b5d656"). InnerVolumeSpecName "kube-api-access-rsrnx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:14:28 crc kubenswrapper[4965]: I0219 10:14:28.958429 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f72a778-ba2a-4454-bba8-865897b5d656-inventory" (OuterVolumeSpecName: "inventory") pod "0f72a778-ba2a-4454-bba8-865897b5d656" (UID: "0f72a778-ba2a-4454-bba8-865897b5d656"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:14:28 crc kubenswrapper[4965]: I0219 10:14:28.974826 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f72a778-ba2a-4454-bba8-865897b5d656-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0f72a778-ba2a-4454-bba8-865897b5d656" (UID: "0f72a778-ba2a-4454-bba8-865897b5d656"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.033558 4965 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f72a778-ba2a-4454-bba8-865897b5d656-inventory\") on node \"crc\" DevicePath \"\""
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.033612 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsrnx\" (UniqueName: \"kubernetes.io/projected/0f72a778-ba2a-4454-bba8-865897b5d656-kube-api-access-rsrnx\") on node \"crc\" DevicePath \"\""
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.033640 4965 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f72a778-ba2a-4454-bba8-865897b5d656-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.266280 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpx6m" event={"ID":"0f72a778-ba2a-4454-bba8-865897b5d656","Type":"ContainerDied","Data":"205a08fa272506b7dde60e9ea25a4d99eefb3258be94f050f06dbe9fadfb714c"}
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.266350 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="205a08fa272506b7dde60e9ea25a4d99eefb3258be94f050f06dbe9fadfb714c"
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.266467 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpx6m"
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.372340 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj"]
Feb 19 10:14:29 crc kubenswrapper[4965]: E0219 10:14:29.373016 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f72a778-ba2a-4454-bba8-865897b5d656" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.373045 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f72a778-ba2a-4454-bba8-865897b5d656" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 19 10:14:29 crc kubenswrapper[4965]: E0219 10:14:29.373075 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80443a02-bea0-40dc-aff2-9472db3c29f6" containerName="extract-content"
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.373088 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="80443a02-bea0-40dc-aff2-9472db3c29f6" containerName="extract-content"
Feb 19 10:14:29 crc kubenswrapper[4965]: E0219 10:14:29.373119 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80443a02-bea0-40dc-aff2-9472db3c29f6" containerName="extract-utilities"
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.373132 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="80443a02-bea0-40dc-aff2-9472db3c29f6" containerName="extract-utilities"
Feb 19 10:14:29 crc kubenswrapper[4965]: E0219 10:14:29.373159 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80443a02-bea0-40dc-aff2-9472db3c29f6" containerName="registry-server"
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.373170 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="80443a02-bea0-40dc-aff2-9472db3c29f6" containerName="registry-server"
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.374705 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="80443a02-bea0-40dc-aff2-9472db3c29f6" containerName="registry-server"
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.374834 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f72a778-ba2a-4454-bba8-865897b5d656" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.376684 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj"
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.379234 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cthw6"
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.380340 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.380442 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.380456 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.389641 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj"]
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.548768 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64c1fbe6-a102-40e1-920a-319b6664c77e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj\" (UID: \"64c1fbe6-a102-40e1-920a-319b6664c77e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj"
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.549400 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64c1fbe6-a102-40e1-920a-319b6664c77e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj\" (UID: \"64c1fbe6-a102-40e1-920a-319b6664c77e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj"
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.549441 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hwgh\" (UniqueName: \"kubernetes.io/projected/64c1fbe6-a102-40e1-920a-319b6664c77e-kube-api-access-5hwgh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj\" (UID: \"64c1fbe6-a102-40e1-920a-319b6664c77e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj"
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.651057 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64c1fbe6-a102-40e1-920a-319b6664c77e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj\" (UID: \"64c1fbe6-a102-40e1-920a-319b6664c77e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj"
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.651120 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hwgh\" (UniqueName: \"kubernetes.io/projected/64c1fbe6-a102-40e1-920a-319b6664c77e-kube-api-access-5hwgh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj\" (UID: \"64c1fbe6-a102-40e1-920a-319b6664c77e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj"
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.651268 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64c1fbe6-a102-40e1-920a-319b6664c77e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj\" (UID: \"64c1fbe6-a102-40e1-920a-319b6664c77e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj"
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.656093 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64c1fbe6-a102-40e1-920a-319b6664c77e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj\" (UID: \"64c1fbe6-a102-40e1-920a-319b6664c77e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj"
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.667848 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hwgh\" (UniqueName: \"kubernetes.io/projected/64c1fbe6-a102-40e1-920a-319b6664c77e-kube-api-access-5hwgh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj\" (UID: \"64c1fbe6-a102-40e1-920a-319b6664c77e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj"
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.670901 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64c1fbe6-a102-40e1-920a-319b6664c77e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj\" (UID: \"64c1fbe6-a102-40e1-920a-319b6664c77e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj"
Feb 19 10:14:29 crc kubenswrapper[4965]: I0219 10:14:29.702634 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj"
Feb 19 10:14:30 crc kubenswrapper[4965]: I0219 10:14:30.323797 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj"]
Feb 19 10:14:31 crc kubenswrapper[4965]: I0219 10:14:31.290950 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj" event={"ID":"64c1fbe6-a102-40e1-920a-319b6664c77e","Type":"ContainerStarted","Data":"284937a30d79744c748eaa23210a0be14d7cecf2d7cf3f7e61fd0c3d88dd5ced"}
Feb 19 10:14:31 crc kubenswrapper[4965]: I0219 10:14:31.291379 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj" event={"ID":"64c1fbe6-a102-40e1-920a-319b6664c77e","Type":"ContainerStarted","Data":"3eb976e46bad50839e4c791dedfc0256ea597aa2ba50633939a9f8c7730188ae"}
Feb 19 10:14:31 crc kubenswrapper[4965]: I0219 10:14:31.311594 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj" podStartSLOduration=1.9267209749999998 podStartE2EDuration="2.311536463s" podCreationTimestamp="2026-02-19 10:14:29 +0000 UTC" firstStartedPulling="2026-02-19 10:14:30.330790944 +0000 UTC m=+1925.952112254" lastFinishedPulling="2026-02-19 10:14:30.715606432 +0000 UTC m=+1926.336927742" observedRunningTime="2026-02-19 10:14:31.310048857 +0000 UTC m=+1926.931370207" watchObservedRunningTime="2026-02-19 10:14:31.311536463 +0000 UTC m=+1926.932857783"
Feb 19 10:14:41 crc kubenswrapper[4965]: I0219 10:14:41.411255 4965 scope.go:117] "RemoveContainer" containerID="5a6e55ba4e494a7e74b24f036da932914ab2bfec978e4cf314b6624b157d1726"
Feb 19 10:14:41 crc kubenswrapper[4965]: I0219 10:14:41.466941 4965 scope.go:117] "RemoveContainer" containerID="aad2b0a33d1032ed03bb2c05b9c8d0033cbf75a14ada835aeb73885c2eb9a14c"
Feb 19 10:14:41 crc kubenswrapper[4965]: I0219 10:14:41.501608 4965 scope.go:117] "RemoveContainer" containerID="4aa009512bde6c9449b3cd2aec121e4159117a2462f90cdfe4e51b8a11236f69"
Feb 19 10:14:41 crc kubenswrapper[4965]: I0219 10:14:41.552265 4965 scope.go:117] "RemoveContainer" containerID="79cbeeb63a28f5c389b9ac0a139d4e62412e586c54b91d1d69a4dc78b98f0110"
Feb 19 10:14:41 crc kubenswrapper[4965]: I0219 10:14:41.630484 4965 scope.go:117] "RemoveContainer" containerID="1ed610a5cb8146470f514103c0c093ce71001aa6764d9e2039af708f123150f7"
Feb 19 10:14:41 crc kubenswrapper[4965]: I0219 10:14:41.663750 4965 scope.go:117] "RemoveContainer" containerID="5ab05f3592b2f219ed44ac0e86ad608bd92a1ab776be7429a7ef3f0bd8f2b808"
Feb 19 10:14:41 crc kubenswrapper[4965]: I0219 10:14:41.715362 4965 scope.go:117] "RemoveContainer" containerID="172a6ea9e25649dca72e46ff39997afa27a809b70090953c37d31aea0aaa238d"
Feb 19 10:14:41 crc kubenswrapper[4965]: I0219 10:14:41.738054 4965 scope.go:117] "RemoveContainer" containerID="220ad7a2f95fc4c08b0feaebaa4f9a1b765b451488b3bd9d79b06f35021fe479"
Feb 19 10:14:55 crc kubenswrapper[4965]: I0219 10:14:55.050449 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-pkhzc"]
Feb 19 10:14:55 crc kubenswrapper[4965]: I0219 10:14:55.062718 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-pkhzc"]
Feb 19 10:14:55 crc kubenswrapper[4965]: I0219 10:14:55.218186 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5e000de-4745-47c0-b6e6-8735c626518e" path="/var/lib/kubelet/pods/d5e000de-4745-47c0-b6e6-8735c626518e/volumes"
Feb 19 10:15:00 crc kubenswrapper[4965]: I0219 10:15:00.153108 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524935-j4f82"]
Feb 19 10:15:00 crc kubenswrapper[4965]: I0219 10:15:00.155251 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-j4f82"
Feb 19 10:15:00 crc kubenswrapper[4965]: I0219 10:15:00.157298 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 19 10:15:00 crc kubenswrapper[4965]: I0219 10:15:00.157787 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 19 10:15:00 crc kubenswrapper[4965]: I0219 10:15:00.164856 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524935-j4f82"]
Feb 19 10:15:00 crc kubenswrapper[4965]: I0219 10:15:00.342695 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2r4h\" (UniqueName: \"kubernetes.io/projected/ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df-kube-api-access-j2r4h\") pod \"collect-profiles-29524935-j4f82\" (UID: \"ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-j4f82"
Feb 19 10:15:00 crc kubenswrapper[4965]: I0219 10:15:00.342884 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df-config-volume\") pod \"collect-profiles-29524935-j4f82\" (UID: \"ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-j4f82"
Feb 19 10:15:00 crc kubenswrapper[4965]: I0219 10:15:00.342933 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df-secret-volume\") pod \"collect-profiles-29524935-j4f82\" (UID: \"ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-j4f82" Feb 19 10:15:00 crc kubenswrapper[4965]: I0219 10:15:00.444276 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df-config-volume\") pod \"collect-profiles-29524935-j4f82\" (UID: \"ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-j4f82" Feb 19 10:15:00 crc kubenswrapper[4965]: I0219 10:15:00.444337 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df-secret-volume\") pod \"collect-profiles-29524935-j4f82\" (UID: \"ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-j4f82" Feb 19 10:15:00 crc kubenswrapper[4965]: I0219 10:15:00.444443 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2r4h\" (UniqueName: \"kubernetes.io/projected/ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df-kube-api-access-j2r4h\") pod \"collect-profiles-29524935-j4f82\" (UID: \"ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-j4f82" Feb 19 10:15:00 crc kubenswrapper[4965]: I0219 10:15:00.445642 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df-config-volume\") pod \"collect-profiles-29524935-j4f82\" (UID: \"ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-j4f82" Feb 19 10:15:00 crc kubenswrapper[4965]: I0219 10:15:00.453965 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df-secret-volume\") pod \"collect-profiles-29524935-j4f82\" (UID: \"ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-j4f82" Feb 19 10:15:00 crc kubenswrapper[4965]: I0219 10:15:00.460655 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2r4h\" (UniqueName: \"kubernetes.io/projected/ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df-kube-api-access-j2r4h\") pod \"collect-profiles-29524935-j4f82\" (UID: \"ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-j4f82" Feb 19 10:15:00 crc kubenswrapper[4965]: I0219 10:15:00.472581 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-j4f82" Feb 19 10:15:00 crc kubenswrapper[4965]: I0219 10:15:00.915252 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524935-j4f82"] Feb 19 10:15:01 crc kubenswrapper[4965]: I0219 10:15:01.622895 4965 generic.go:334] "Generic (PLEG): container finished" podID="ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df" containerID="970416cf91831295796acce4fec0c1eca686b614781bd93279a9fbc0081b8306" exitCode=0 Feb 19 10:15:01 crc kubenswrapper[4965]: I0219 10:15:01.622968 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-j4f82" event={"ID":"ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df","Type":"ContainerDied","Data":"970416cf91831295796acce4fec0c1eca686b614781bd93279a9fbc0081b8306"} Feb 19 10:15:01 crc kubenswrapper[4965]: I0219 10:15:01.624313 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-j4f82" 
event={"ID":"ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df","Type":"ContainerStarted","Data":"a585aa36cd35cbc87fdee7d132a2b541eda5141673cf512bbb9e95b297dcc962"} Feb 19 10:15:03 crc kubenswrapper[4965]: I0219 10:15:03.085184 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-j4f82" Feb 19 10:15:03 crc kubenswrapper[4965]: I0219 10:15:03.205287 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df-secret-volume\") pod \"ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df\" (UID: \"ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df\") " Feb 19 10:15:03 crc kubenswrapper[4965]: I0219 10:15:03.205431 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2r4h\" (UniqueName: \"kubernetes.io/projected/ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df-kube-api-access-j2r4h\") pod \"ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df\" (UID: \"ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df\") " Feb 19 10:15:03 crc kubenswrapper[4965]: I0219 10:15:03.205634 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df-config-volume\") pod \"ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df\" (UID: \"ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df\") " Feb 19 10:15:03 crc kubenswrapper[4965]: I0219 10:15:03.206706 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df-config-volume" (OuterVolumeSpecName: "config-volume") pod "ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df" (UID: "ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:15:03 crc kubenswrapper[4965]: I0219 10:15:03.213124 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df" (UID: "ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:15:03 crc kubenswrapper[4965]: I0219 10:15:03.213690 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df-kube-api-access-j2r4h" (OuterVolumeSpecName: "kube-api-access-j2r4h") pod "ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df" (UID: "ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df"). InnerVolumeSpecName "kube-api-access-j2r4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:15:03 crc kubenswrapper[4965]: I0219 10:15:03.324699 4965 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:03 crc kubenswrapper[4965]: I0219 10:15:03.324738 4965 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:03 crc kubenswrapper[4965]: I0219 10:15:03.324751 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2r4h\" (UniqueName: \"kubernetes.io/projected/ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df-kube-api-access-j2r4h\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:03 crc kubenswrapper[4965]: I0219 10:15:03.641976 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-j4f82" 
event={"ID":"ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df","Type":"ContainerDied","Data":"a585aa36cd35cbc87fdee7d132a2b541eda5141673cf512bbb9e95b297dcc962"} Feb 19 10:15:03 crc kubenswrapper[4965]: I0219 10:15:03.642015 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a585aa36cd35cbc87fdee7d132a2b541eda5141673cf512bbb9e95b297dcc962" Feb 19 10:15:03 crc kubenswrapper[4965]: I0219 10:15:03.642359 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-j4f82" Feb 19 10:15:16 crc kubenswrapper[4965]: I0219 10:15:16.761604 4965 generic.go:334] "Generic (PLEG): container finished" podID="64c1fbe6-a102-40e1-920a-319b6664c77e" containerID="284937a30d79744c748eaa23210a0be14d7cecf2d7cf3f7e61fd0c3d88dd5ced" exitCode=0 Feb 19 10:15:16 crc kubenswrapper[4965]: I0219 10:15:16.761703 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj" event={"ID":"64c1fbe6-a102-40e1-920a-319b6664c77e","Type":"ContainerDied","Data":"284937a30d79744c748eaa23210a0be14d7cecf2d7cf3f7e61fd0c3d88dd5ced"} Feb 19 10:15:18 crc kubenswrapper[4965]: I0219 10:15:18.272280 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj" Feb 19 10:15:18 crc kubenswrapper[4965]: I0219 10:15:18.437847 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hwgh\" (UniqueName: \"kubernetes.io/projected/64c1fbe6-a102-40e1-920a-319b6664c77e-kube-api-access-5hwgh\") pod \"64c1fbe6-a102-40e1-920a-319b6664c77e\" (UID: \"64c1fbe6-a102-40e1-920a-319b6664c77e\") " Feb 19 10:15:18 crc kubenswrapper[4965]: I0219 10:15:18.438285 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64c1fbe6-a102-40e1-920a-319b6664c77e-ssh-key-openstack-edpm-ipam\") pod \"64c1fbe6-a102-40e1-920a-319b6664c77e\" (UID: \"64c1fbe6-a102-40e1-920a-319b6664c77e\") " Feb 19 10:15:18 crc kubenswrapper[4965]: I0219 10:15:18.438321 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64c1fbe6-a102-40e1-920a-319b6664c77e-inventory\") pod \"64c1fbe6-a102-40e1-920a-319b6664c77e\" (UID: \"64c1fbe6-a102-40e1-920a-319b6664c77e\") " Feb 19 10:15:18 crc kubenswrapper[4965]: I0219 10:15:18.452495 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64c1fbe6-a102-40e1-920a-319b6664c77e-kube-api-access-5hwgh" (OuterVolumeSpecName: "kube-api-access-5hwgh") pod "64c1fbe6-a102-40e1-920a-319b6664c77e" (UID: "64c1fbe6-a102-40e1-920a-319b6664c77e"). InnerVolumeSpecName "kube-api-access-5hwgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:15:18 crc kubenswrapper[4965]: E0219 10:15:18.466630 4965 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64c1fbe6-a102-40e1-920a-319b6664c77e-ssh-key-openstack-edpm-ipam podName:64c1fbe6-a102-40e1-920a-319b6664c77e nodeName:}" failed. 
No retries permitted until 2026-02-19 10:15:18.966593854 +0000 UTC m=+1974.587915174 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/64c1fbe6-a102-40e1-920a-319b6664c77e-ssh-key-openstack-edpm-ipam") pod "64c1fbe6-a102-40e1-920a-319b6664c77e" (UID: "64c1fbe6-a102-40e1-920a-319b6664c77e") : error deleting /var/lib/kubelet/pods/64c1fbe6-a102-40e1-920a-319b6664c77e/volume-subpaths: remove /var/lib/kubelet/pods/64c1fbe6-a102-40e1-920a-319b6664c77e/volume-subpaths: no such file or directory Feb 19 10:15:18 crc kubenswrapper[4965]: I0219 10:15:18.470665 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c1fbe6-a102-40e1-920a-319b6664c77e-inventory" (OuterVolumeSpecName: "inventory") pod "64c1fbe6-a102-40e1-920a-319b6664c77e" (UID: "64c1fbe6-a102-40e1-920a-319b6664c77e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:15:18 crc kubenswrapper[4965]: I0219 10:15:18.541964 4965 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64c1fbe6-a102-40e1-920a-319b6664c77e-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:18 crc kubenswrapper[4965]: I0219 10:15:18.542004 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hwgh\" (UniqueName: \"kubernetes.io/projected/64c1fbe6-a102-40e1-920a-319b6664c77e-kube-api-access-5hwgh\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:18 crc kubenswrapper[4965]: I0219 10:15:18.782457 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj" Feb 19 10:15:18 crc kubenswrapper[4965]: I0219 10:15:18.782475 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj" event={"ID":"64c1fbe6-a102-40e1-920a-319b6664c77e","Type":"ContainerDied","Data":"3eb976e46bad50839e4c791dedfc0256ea597aa2ba50633939a9f8c7730188ae"} Feb 19 10:15:18 crc kubenswrapper[4965]: I0219 10:15:18.782891 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3eb976e46bad50839e4c791dedfc0256ea597aa2ba50633939a9f8c7730188ae" Feb 19 10:15:18 crc kubenswrapper[4965]: I0219 10:15:18.871601 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-rz8q9"] Feb 19 10:15:18 crc kubenswrapper[4965]: E0219 10:15:18.872149 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c1fbe6-a102-40e1-920a-319b6664c77e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:15:18 crc kubenswrapper[4965]: I0219 10:15:18.872180 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c1fbe6-a102-40e1-920a-319b6664c77e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:15:18 crc kubenswrapper[4965]: E0219 10:15:18.872241 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df" containerName="collect-profiles" Feb 19 10:15:18 crc kubenswrapper[4965]: I0219 10:15:18.872251 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df" containerName="collect-profiles" Feb 19 10:15:18 crc kubenswrapper[4965]: I0219 10:15:18.872648 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="64c1fbe6-a102-40e1-920a-319b6664c77e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:15:18 crc kubenswrapper[4965]: I0219 10:15:18.872671 4965 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ff2a3ee5-c9c6-4486-b7ae-da6c9adc10df" containerName="collect-profiles" Feb 19 10:15:18 crc kubenswrapper[4965]: I0219 10:15:18.873853 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-rz8q9" Feb 19 10:15:18 crc kubenswrapper[4965]: I0219 10:15:18.884170 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-rz8q9"] Feb 19 10:15:18 crc kubenswrapper[4965]: I0219 10:15:18.949454 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-rz8q9\" (UID: \"b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4\") " pod="openstack/ssh-known-hosts-edpm-deployment-rz8q9" Feb 19 10:15:18 crc kubenswrapper[4965]: I0219 10:15:18.949514 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-rz8q9\" (UID: \"b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4\") " pod="openstack/ssh-known-hosts-edpm-deployment-rz8q9" Feb 19 10:15:18 crc kubenswrapper[4965]: I0219 10:15:18.949639 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47bv4\" (UniqueName: \"kubernetes.io/projected/b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4-kube-api-access-47bv4\") pod \"ssh-known-hosts-edpm-deployment-rz8q9\" (UID: \"b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4\") " pod="openstack/ssh-known-hosts-edpm-deployment-rz8q9" Feb 19 10:15:19 crc kubenswrapper[4965]: I0219 10:15:19.050576 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/64c1fbe6-a102-40e1-920a-319b6664c77e-ssh-key-openstack-edpm-ipam\") pod \"64c1fbe6-a102-40e1-920a-319b6664c77e\" (UID: \"64c1fbe6-a102-40e1-920a-319b6664c77e\") " Feb 19 10:15:19 crc kubenswrapper[4965]: I0219 10:15:19.050823 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-rz8q9\" (UID: \"b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4\") " pod="openstack/ssh-known-hosts-edpm-deployment-rz8q9" Feb 19 10:15:19 crc kubenswrapper[4965]: I0219 10:15:19.050845 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-rz8q9\" (UID: \"b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4\") " pod="openstack/ssh-known-hosts-edpm-deployment-rz8q9" Feb 19 10:15:19 crc kubenswrapper[4965]: I0219 10:15:19.050918 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47bv4\" (UniqueName: \"kubernetes.io/projected/b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4-kube-api-access-47bv4\") pod \"ssh-known-hosts-edpm-deployment-rz8q9\" (UID: \"b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4\") " pod="openstack/ssh-known-hosts-edpm-deployment-rz8q9" Feb 19 10:15:19 crc kubenswrapper[4965]: I0219 10:15:19.054919 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-rz8q9\" (UID: \"b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4\") " pod="openstack/ssh-known-hosts-edpm-deployment-rz8q9" Feb 19 10:15:19 crc kubenswrapper[4965]: I0219 10:15:19.054973 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-rz8q9\" (UID: \"b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4\") " pod="openstack/ssh-known-hosts-edpm-deployment-rz8q9" Feb 19 10:15:19 crc kubenswrapper[4965]: I0219 10:15:19.062347 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c1fbe6-a102-40e1-920a-319b6664c77e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "64c1fbe6-a102-40e1-920a-319b6664c77e" (UID: "64c1fbe6-a102-40e1-920a-319b6664c77e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:15:19 crc kubenswrapper[4965]: I0219 10:15:19.072092 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47bv4\" (UniqueName: \"kubernetes.io/projected/b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4-kube-api-access-47bv4\") pod \"ssh-known-hosts-edpm-deployment-rz8q9\" (UID: \"b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4\") " pod="openstack/ssh-known-hosts-edpm-deployment-rz8q9" Feb 19 10:15:19 crc kubenswrapper[4965]: I0219 10:15:19.153643 4965 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64c1fbe6-a102-40e1-920a-319b6664c77e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:19 crc kubenswrapper[4965]: I0219 10:15:19.240734 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-rz8q9" Feb 19 10:15:19 crc kubenswrapper[4965]: I0219 10:15:19.774141 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-rz8q9"] Feb 19 10:15:19 crc kubenswrapper[4965]: I0219 10:15:19.778089 4965 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:15:19 crc kubenswrapper[4965]: I0219 10:15:19.793229 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-rz8q9" event={"ID":"b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4","Type":"ContainerStarted","Data":"b60d7ef3de38fd3535521a7f0c169fa3b429a8683e7b5bc16e83aa18c5b926e7"} Feb 19 10:15:20 crc kubenswrapper[4965]: I0219 10:15:20.806953 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-rz8q9" event={"ID":"b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4","Type":"ContainerStarted","Data":"cf45e75967d1dca6681c2907a32430980b00959355027f24dea9b82838eb6a9f"} Feb 19 10:15:20 crc kubenswrapper[4965]: I0219 10:15:20.839099 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-rz8q9" podStartSLOduration=2.128574872 podStartE2EDuration="2.839049554s" podCreationTimestamp="2026-02-19 10:15:18 +0000 UTC" firstStartedPulling="2026-02-19 10:15:19.777892076 +0000 UTC m=+1975.399213386" lastFinishedPulling="2026-02-19 10:15:20.488366758 +0000 UTC m=+1976.109688068" observedRunningTime="2026-02-19 10:15:20.825621298 +0000 UTC m=+1976.446942638" watchObservedRunningTime="2026-02-19 10:15:20.839049554 +0000 UTC m=+1976.460370874" Feb 19 10:15:21 crc kubenswrapper[4965]: I0219 10:15:21.065679 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-hlxzf"] Feb 19 10:15:21 crc kubenswrapper[4965]: I0219 10:15:21.081847 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell0-cell-mapping-hlxzf"] Feb 19 10:15:21 crc kubenswrapper[4965]: I0219 10:15:21.221627 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34a7f92d-3391-4c12-8d6b-14b531d39757" path="/var/lib/kubelet/pods/34a7f92d-3391-4c12-8d6b-14b531d39757/volumes" Feb 19 10:15:23 crc kubenswrapper[4965]: I0219 10:15:23.036930 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k2dlm"] Feb 19 10:15:23 crc kubenswrapper[4965]: I0219 10:15:23.049848 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k2dlm"] Feb 19 10:15:23 crc kubenswrapper[4965]: I0219 10:15:23.210277 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95695f23-a4c9-4165-9dd6-d897ada26e93" path="/var/lib/kubelet/pods/95695f23-a4c9-4165-9dd6-d897ada26e93/volumes" Feb 19 10:15:27 crc kubenswrapper[4965]: I0219 10:15:27.877368 4965 generic.go:334] "Generic (PLEG): container finished" podID="b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4" containerID="cf45e75967d1dca6681c2907a32430980b00959355027f24dea9b82838eb6a9f" exitCode=0 Feb 19 10:15:27 crc kubenswrapper[4965]: I0219 10:15:27.877435 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-rz8q9" event={"ID":"b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4","Type":"ContainerDied","Data":"cf45e75967d1dca6681c2907a32430980b00959355027f24dea9b82838eb6a9f"} Feb 19 10:15:29 crc kubenswrapper[4965]: I0219 10:15:29.394647 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-rz8q9" Feb 19 10:15:29 crc kubenswrapper[4965]: I0219 10:15:29.476047 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47bv4\" (UniqueName: \"kubernetes.io/projected/b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4-kube-api-access-47bv4\") pod \"b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4\" (UID: \"b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4\") " Feb 19 10:15:29 crc kubenswrapper[4965]: I0219 10:15:29.476336 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4-ssh-key-openstack-edpm-ipam\") pod \"b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4\" (UID: \"b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4\") " Feb 19 10:15:29 crc kubenswrapper[4965]: I0219 10:15:29.476392 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4-inventory-0\") pod \"b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4\" (UID: \"b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4\") " Feb 19 10:15:29 crc kubenswrapper[4965]: I0219 10:15:29.483296 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4-kube-api-access-47bv4" (OuterVolumeSpecName: "kube-api-access-47bv4") pod "b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4" (UID: "b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4"). InnerVolumeSpecName "kube-api-access-47bv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:15:29 crc kubenswrapper[4965]: I0219 10:15:29.507435 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4" (UID: "b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:15:29 crc kubenswrapper[4965]: I0219 10:15:29.513687 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4" (UID: "b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:15:29 crc kubenswrapper[4965]: I0219 10:15:29.578949 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47bv4\" (UniqueName: \"kubernetes.io/projected/b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4-kube-api-access-47bv4\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:29 crc kubenswrapper[4965]: I0219 10:15:29.579120 4965 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:29 crc kubenswrapper[4965]: I0219 10:15:29.579184 4965 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:29 crc kubenswrapper[4965]: I0219 10:15:29.899732 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-rz8q9" event={"ID":"b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4","Type":"ContainerDied","Data":"b60d7ef3de38fd3535521a7f0c169fa3b429a8683e7b5bc16e83aa18c5b926e7"} Feb 19 10:15:29 crc kubenswrapper[4965]: I0219 10:15:29.899778 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b60d7ef3de38fd3535521a7f0c169fa3b429a8683e7b5bc16e83aa18c5b926e7" Feb 19 10:15:29 crc kubenswrapper[4965]: I0219 10:15:29.899781 
4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-rz8q9" Feb 19 10:15:30 crc kubenswrapper[4965]: I0219 10:15:30.061327 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bv8v8"] Feb 19 10:15:30 crc kubenswrapper[4965]: E0219 10:15:30.061905 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4" containerName="ssh-known-hosts-edpm-deployment" Feb 19 10:15:30 crc kubenswrapper[4965]: I0219 10:15:30.061923 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4" containerName="ssh-known-hosts-edpm-deployment" Feb 19 10:15:30 crc kubenswrapper[4965]: I0219 10:15:30.062159 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4" containerName="ssh-known-hosts-edpm-deployment" Feb 19 10:15:30 crc kubenswrapper[4965]: I0219 10:15:30.063161 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bv8v8" Feb 19 10:15:30 crc kubenswrapper[4965]: I0219 10:15:30.065339 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:15:30 crc kubenswrapper[4965]: I0219 10:15:30.065396 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:15:30 crc kubenswrapper[4965]: I0219 10:15:30.065340 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cthw6" Feb 19 10:15:30 crc kubenswrapper[4965]: I0219 10:15:30.067666 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:15:30 crc kubenswrapper[4965]: I0219 10:15:30.073001 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bv8v8"] Feb 19 10:15:30 crc kubenswrapper[4965]: I0219 10:15:30.198036 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6827b2eb-c6f9-42d2-b11d-ef676213f97f-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bv8v8\" (UID: \"6827b2eb-c6f9-42d2-b11d-ef676213f97f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bv8v8" Feb 19 10:15:30 crc kubenswrapper[4965]: I0219 10:15:30.198158 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6827b2eb-c6f9-42d2-b11d-ef676213f97f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bv8v8\" (UID: \"6827b2eb-c6f9-42d2-b11d-ef676213f97f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bv8v8" Feb 19 10:15:30 crc kubenswrapper[4965]: I0219 10:15:30.198287 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcdp9\" (UniqueName: \"kubernetes.io/projected/6827b2eb-c6f9-42d2-b11d-ef676213f97f-kube-api-access-vcdp9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bv8v8\" (UID: \"6827b2eb-c6f9-42d2-b11d-ef676213f97f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bv8v8" Feb 19 10:15:30 crc kubenswrapper[4965]: I0219 10:15:30.300424 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6827b2eb-c6f9-42d2-b11d-ef676213f97f-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bv8v8\" (UID: \"6827b2eb-c6f9-42d2-b11d-ef676213f97f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bv8v8" Feb 19 10:15:30 crc kubenswrapper[4965]: I0219 10:15:30.300529 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6827b2eb-c6f9-42d2-b11d-ef676213f97f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bv8v8\" (UID: \"6827b2eb-c6f9-42d2-b11d-ef676213f97f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bv8v8" Feb 19 10:15:30 crc kubenswrapper[4965]: I0219 10:15:30.300599 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcdp9\" (UniqueName: \"kubernetes.io/projected/6827b2eb-c6f9-42d2-b11d-ef676213f97f-kube-api-access-vcdp9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bv8v8\" (UID: \"6827b2eb-c6f9-42d2-b11d-ef676213f97f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bv8v8" Feb 19 10:15:30 crc kubenswrapper[4965]: I0219 10:15:30.310945 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6827b2eb-c6f9-42d2-b11d-ef676213f97f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bv8v8\" (UID: 
\"6827b2eb-c6f9-42d2-b11d-ef676213f97f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bv8v8" Feb 19 10:15:30 crc kubenswrapper[4965]: I0219 10:15:30.311037 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6827b2eb-c6f9-42d2-b11d-ef676213f97f-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bv8v8\" (UID: \"6827b2eb-c6f9-42d2-b11d-ef676213f97f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bv8v8" Feb 19 10:15:30 crc kubenswrapper[4965]: I0219 10:15:30.331989 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcdp9\" (UniqueName: \"kubernetes.io/projected/6827b2eb-c6f9-42d2-b11d-ef676213f97f-kube-api-access-vcdp9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bv8v8\" (UID: \"6827b2eb-c6f9-42d2-b11d-ef676213f97f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bv8v8" Feb 19 10:15:30 crc kubenswrapper[4965]: I0219 10:15:30.382781 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bv8v8" Feb 19 10:15:30 crc kubenswrapper[4965]: I0219 10:15:30.919433 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bv8v8"] Feb 19 10:15:31 crc kubenswrapper[4965]: I0219 10:15:31.917897 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bv8v8" event={"ID":"6827b2eb-c6f9-42d2-b11d-ef676213f97f","Type":"ContainerStarted","Data":"c80b331bf3d32642ed842df523a224a8744a7f8525c5086d47c480e405533c14"} Feb 19 10:15:31 crc kubenswrapper[4965]: I0219 10:15:31.918181 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bv8v8" event={"ID":"6827b2eb-c6f9-42d2-b11d-ef676213f97f","Type":"ContainerStarted","Data":"318126a0fbb1cc0660328e68bafa66c512528f63ac1e89b95c65b40c9c4444e5"} Feb 19 10:15:31 crc kubenswrapper[4965]: I0219 10:15:31.939921 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bv8v8" podStartSLOduration=1.49002513 podStartE2EDuration="1.939899734s" podCreationTimestamp="2026-02-19 10:15:30 +0000 UTC" firstStartedPulling="2026-02-19 10:15:30.921906954 +0000 UTC m=+1986.543228264" lastFinishedPulling="2026-02-19 10:15:31.371781548 +0000 UTC m=+1986.993102868" observedRunningTime="2026-02-19 10:15:31.931666874 +0000 UTC m=+1987.552988204" watchObservedRunningTime="2026-02-19 10:15:31.939899734 +0000 UTC m=+1987.561221034" Feb 19 10:15:38 crc kubenswrapper[4965]: I0219 10:15:38.981854 4965 generic.go:334] "Generic (PLEG): container finished" podID="6827b2eb-c6f9-42d2-b11d-ef676213f97f" containerID="c80b331bf3d32642ed842df523a224a8744a7f8525c5086d47c480e405533c14" exitCode=0 Feb 19 10:15:38 crc kubenswrapper[4965]: I0219 10:15:38.981974 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bv8v8" event={"ID":"6827b2eb-c6f9-42d2-b11d-ef676213f97f","Type":"ContainerDied","Data":"c80b331bf3d32642ed842df523a224a8744a7f8525c5086d47c480e405533c14"} Feb 19 10:15:40 crc kubenswrapper[4965]: I0219 10:15:40.592464 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bv8v8" Feb 19 10:15:40 crc kubenswrapper[4965]: I0219 10:15:40.744010 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6827b2eb-c6f9-42d2-b11d-ef676213f97f-ssh-key-openstack-edpm-ipam\") pod \"6827b2eb-c6f9-42d2-b11d-ef676213f97f\" (UID: \"6827b2eb-c6f9-42d2-b11d-ef676213f97f\") " Feb 19 10:15:40 crc kubenswrapper[4965]: I0219 10:15:40.744233 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcdp9\" (UniqueName: \"kubernetes.io/projected/6827b2eb-c6f9-42d2-b11d-ef676213f97f-kube-api-access-vcdp9\") pod \"6827b2eb-c6f9-42d2-b11d-ef676213f97f\" (UID: \"6827b2eb-c6f9-42d2-b11d-ef676213f97f\") " Feb 19 10:15:40 crc kubenswrapper[4965]: I0219 10:15:40.744323 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6827b2eb-c6f9-42d2-b11d-ef676213f97f-inventory\") pod \"6827b2eb-c6f9-42d2-b11d-ef676213f97f\" (UID: \"6827b2eb-c6f9-42d2-b11d-ef676213f97f\") " Feb 19 10:15:40 crc kubenswrapper[4965]: I0219 10:15:40.750099 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6827b2eb-c6f9-42d2-b11d-ef676213f97f-kube-api-access-vcdp9" (OuterVolumeSpecName: "kube-api-access-vcdp9") pod "6827b2eb-c6f9-42d2-b11d-ef676213f97f" (UID: "6827b2eb-c6f9-42d2-b11d-ef676213f97f"). InnerVolumeSpecName "kube-api-access-vcdp9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:15:40 crc kubenswrapper[4965]: I0219 10:15:40.784723 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6827b2eb-c6f9-42d2-b11d-ef676213f97f-inventory" (OuterVolumeSpecName: "inventory") pod "6827b2eb-c6f9-42d2-b11d-ef676213f97f" (UID: "6827b2eb-c6f9-42d2-b11d-ef676213f97f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:15:40 crc kubenswrapper[4965]: I0219 10:15:40.799047 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6827b2eb-c6f9-42d2-b11d-ef676213f97f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6827b2eb-c6f9-42d2-b11d-ef676213f97f" (UID: "6827b2eb-c6f9-42d2-b11d-ef676213f97f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:15:40 crc kubenswrapper[4965]: I0219 10:15:40.847257 4965 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6827b2eb-c6f9-42d2-b11d-ef676213f97f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:40 crc kubenswrapper[4965]: I0219 10:15:40.847293 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcdp9\" (UniqueName: \"kubernetes.io/projected/6827b2eb-c6f9-42d2-b11d-ef676213f97f-kube-api-access-vcdp9\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:40 crc kubenswrapper[4965]: I0219 10:15:40.847305 4965 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6827b2eb-c6f9-42d2-b11d-ef676213f97f-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.013451 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bv8v8" 
event={"ID":"6827b2eb-c6f9-42d2-b11d-ef676213f97f","Type":"ContainerDied","Data":"318126a0fbb1cc0660328e68bafa66c512528f63ac1e89b95c65b40c9c4444e5"} Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.013493 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="318126a0fbb1cc0660328e68bafa66c512528f63ac1e89b95c65b40c9c4444e5" Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.013526 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bv8v8" Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.090270 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4"] Feb 19 10:15:41 crc kubenswrapper[4965]: E0219 10:15:41.090910 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6827b2eb-c6f9-42d2-b11d-ef676213f97f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.090975 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="6827b2eb-c6f9-42d2-b11d-ef676213f97f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.091230 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="6827b2eb-c6f9-42d2-b11d-ef676213f97f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.091958 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4" Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.094108 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.094421 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.095125 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cthw6" Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.096289 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.099867 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4"] Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.257842 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4\" (UID: \"6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4" Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.257940 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hwhx\" (UniqueName: \"kubernetes.io/projected/6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763-kube-api-access-8hwhx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4\" (UID: \"6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4" Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.258108 4965 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4\" (UID: \"6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4" Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.360392 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4\" (UID: \"6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4" Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.360496 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hwhx\" (UniqueName: \"kubernetes.io/projected/6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763-kube-api-access-8hwhx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4\" (UID: \"6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4" Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.360783 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4\" (UID: \"6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4" Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.366521 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4\" (UID: \"6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4" Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.369052 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4\" (UID: \"6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4" Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.386958 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hwhx\" (UniqueName: \"kubernetes.io/projected/6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763-kube-api-access-8hwhx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4\" (UID: \"6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4" Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.412965 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4" Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.924646 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4"] Feb 19 10:15:41 crc kubenswrapper[4965]: I0219 10:15:41.928456 4965 scope.go:117] "RemoveContainer" containerID="e6e39480141c2b298b0714ff7b8a5bf42287acb818abf00252f976c3aba872c4" Feb 19 10:15:42 crc kubenswrapper[4965]: I0219 10:15:42.023314 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4" event={"ID":"6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763","Type":"ContainerStarted","Data":"ce63b87b882887ce0e3a70ce112d02dd18d4bdc007880ba92f4dd17868d38d3e"} Feb 19 10:15:42 crc kubenswrapper[4965]: I0219 10:15:42.035133 4965 scope.go:117] "RemoveContainer" containerID="17e34b954630a15dc81e81259a8da845f60d1930dfd17fdc208b22d4eea61e55" Feb 19 10:15:42 crc kubenswrapper[4965]: I0219 10:15:42.082911 4965 scope.go:117] "RemoveContainer" containerID="4ca0cb278135cdd36718827ee36f015f4fe1729fe7693d3cd38cc8ff8e2ced90" Feb 19 10:15:44 crc kubenswrapper[4965]: I0219 10:15:44.063128 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4" event={"ID":"6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763","Type":"ContainerStarted","Data":"fd9835d3e3cab352620ef13d5c9c1e0b76826f8137e8b626eb99cd6c445e3571"} Feb 19 10:15:44 crc kubenswrapper[4965]: I0219 10:15:44.094833 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4" podStartSLOduration=1.846415435 podStartE2EDuration="3.094800829s" podCreationTimestamp="2026-02-19 10:15:41 +0000 UTC" firstStartedPulling="2026-02-19 10:15:41.928592638 +0000 UTC m=+1997.549913948" lastFinishedPulling="2026-02-19 10:15:43.176978032 +0000 UTC m=+1998.798299342" 
observedRunningTime="2026-02-19 10:15:44.087380319 +0000 UTC m=+1999.708701669" watchObservedRunningTime="2026-02-19 10:15:44.094800829 +0000 UTC m=+1999.716122179" Feb 19 10:15:46 crc kubenswrapper[4965]: I0219 10:15:46.601313 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:15:46 crc kubenswrapper[4965]: I0219 10:15:46.601752 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:15:53 crc kubenswrapper[4965]: I0219 10:15:53.159233 4965 generic.go:334] "Generic (PLEG): container finished" podID="6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763" containerID="fd9835d3e3cab352620ef13d5c9c1e0b76826f8137e8b626eb99cd6c445e3571" exitCode=0 Feb 19 10:15:53 crc kubenswrapper[4965]: I0219 10:15:53.159303 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4" event={"ID":"6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763","Type":"ContainerDied","Data":"fd9835d3e3cab352620ef13d5c9c1e0b76826f8137e8b626eb99cd6c445e3571"} Feb 19 10:15:54 crc kubenswrapper[4965]: I0219 10:15:54.681956 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4" Feb 19 10:15:54 crc kubenswrapper[4965]: I0219 10:15:54.783837 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763-ssh-key-openstack-edpm-ipam\") pod \"6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763\" (UID: \"6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763\") " Feb 19 10:15:54 crc kubenswrapper[4965]: I0219 10:15:54.783955 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763-inventory\") pod \"6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763\" (UID: \"6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763\") " Feb 19 10:15:54 crc kubenswrapper[4965]: I0219 10:15:54.784096 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hwhx\" (UniqueName: \"kubernetes.io/projected/6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763-kube-api-access-8hwhx\") pod \"6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763\" (UID: \"6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763\") " Feb 19 10:15:54 crc kubenswrapper[4965]: I0219 10:15:54.790537 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763-kube-api-access-8hwhx" (OuterVolumeSpecName: "kube-api-access-8hwhx") pod "6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763" (UID: "6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763"). InnerVolumeSpecName "kube-api-access-8hwhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:15:54 crc kubenswrapper[4965]: I0219 10:15:54.831994 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763-inventory" (OuterVolumeSpecName: "inventory") pod "6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763" (UID: "6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:15:54 crc kubenswrapper[4965]: I0219 10:15:54.842601 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763" (UID: "6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:15:54 crc kubenswrapper[4965]: I0219 10:15:54.886696 4965 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:54 crc kubenswrapper[4965]: I0219 10:15:54.886734 4965 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:54 crc kubenswrapper[4965]: I0219 10:15:54.886745 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hwhx\" (UniqueName: \"kubernetes.io/projected/6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763-kube-api-access-8hwhx\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.178332 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4" event={"ID":"6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763","Type":"ContainerDied","Data":"ce63b87b882887ce0e3a70ce112d02dd18d4bdc007880ba92f4dd17868d38d3e"} Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.178406 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce63b87b882887ce0e3a70ce112d02dd18d4bdc007880ba92f4dd17868d38d3e" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 
10:15:55.178552 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.351973 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56"] Feb 19 10:15:55 crc kubenswrapper[4965]: E0219 10:15:55.352491 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.352519 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.352750 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.353536 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.357980 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cthw6" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.358268 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.358346 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.358504 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.358560 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.358590 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.367552 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.368365 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.381051 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56"] Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.499421 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.499488 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.500281 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.500348 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.500420 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.500451 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.500468 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.500505 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.500635 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.500699 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.500758 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.500793 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qctgf\" (UniqueName: \"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-kube-api-access-qctgf\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.500897 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.500969 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.603315 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.603380 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.603420 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qctgf\" (UniqueName: 
\"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-kube-api-access-qctgf\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.603484 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.603539 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.603578 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.603601 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-telemetry-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.603640 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.603676 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.603720 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.603749 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: 
\"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.603771 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.603813 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.603849 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.608612 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 
crc kubenswrapper[4965]: I0219 10:15:55.609410 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.609414 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.609450 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.610650 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.611176 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.611690 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.614495 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.614848 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.615256 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-nova-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.618695 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.624858 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.625697 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.636787 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qctgf\" (UniqueName: \"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-kube-api-access-qctgf\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gtd56\" (UID: 
\"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:55 crc kubenswrapper[4965]: I0219 10:15:55.674616 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:15:56 crc kubenswrapper[4965]: I0219 10:15:56.249399 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56"] Feb 19 10:15:57 crc kubenswrapper[4965]: I0219 10:15:57.215796 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" event={"ID":"25080ebe-a4ea-4698-b64c-b7064ff93db6","Type":"ContainerStarted","Data":"4d4403c88a01e6f635f56e98e5c3f21a3e7c5246eb5f215ce56dae19fef31ea7"} Feb 19 10:15:57 crc kubenswrapper[4965]: I0219 10:15:57.216237 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" event={"ID":"25080ebe-a4ea-4698-b64c-b7064ff93db6","Type":"ContainerStarted","Data":"b449e54b42e0a7331b8dfe555430c3717979f9a60d32df50d308e541f2729bae"} Feb 19 10:16:07 crc kubenswrapper[4965]: I0219 10:16:07.030540 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" podStartSLOduration=11.386866193 podStartE2EDuration="12.030522872s" podCreationTimestamp="2026-02-19 10:15:55 +0000 UTC" firstStartedPulling="2026-02-19 10:15:56.251276992 +0000 UTC m=+2011.872598332" lastFinishedPulling="2026-02-19 10:15:56.894933701 +0000 UTC m=+2012.516255011" observedRunningTime="2026-02-19 10:15:58.242087224 +0000 UTC m=+2013.863408584" watchObservedRunningTime="2026-02-19 10:16:07.030522872 +0000 UTC m=+2022.651844182" Feb 19 10:16:07 crc kubenswrapper[4965]: I0219 10:16:07.039720 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-cell-mapping-rdjn2"] Feb 19 10:16:07 crc kubenswrapper[4965]: I0219 10:16:07.050223 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-rdjn2"] Feb 19 10:16:07 crc kubenswrapper[4965]: I0219 10:16:07.211732 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b21ddc99-df08-4635-996d-872a7c3f6f3b" path="/var/lib/kubelet/pods/b21ddc99-df08-4635-996d-872a7c3f6f3b/volumes" Feb 19 10:16:16 crc kubenswrapper[4965]: I0219 10:16:16.601038 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:16:16 crc kubenswrapper[4965]: I0219 10:16:16.601618 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:16:32 crc kubenswrapper[4965]: I0219 10:16:32.527257 4965 generic.go:334] "Generic (PLEG): container finished" podID="25080ebe-a4ea-4698-b64c-b7064ff93db6" containerID="4d4403c88a01e6f635f56e98e5c3f21a3e7c5246eb5f215ce56dae19fef31ea7" exitCode=0 Feb 19 10:16:32 crc kubenswrapper[4965]: I0219 10:16:32.527367 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" event={"ID":"25080ebe-a4ea-4698-b64c-b7064ff93db6","Type":"ContainerDied","Data":"4d4403c88a01e6f635f56e98e5c3f21a3e7c5246eb5f215ce56dae19fef31ea7"} Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.212717 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.409620 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"25080ebe-a4ea-4698-b64c-b7064ff93db6\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.409705 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-bootstrap-combined-ca-bundle\") pod \"25080ebe-a4ea-4698-b64c-b7064ff93db6\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.409748 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-libvirt-combined-ca-bundle\") pod \"25080ebe-a4ea-4698-b64c-b7064ff93db6\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.409869 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"25080ebe-a4ea-4698-b64c-b7064ff93db6\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.409935 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-ovn-combined-ca-bundle\") pod \"25080ebe-a4ea-4698-b64c-b7064ff93db6\" (UID: 
\"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.410018 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qctgf\" (UniqueName: \"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-kube-api-access-qctgf\") pod \"25080ebe-a4ea-4698-b64c-b7064ff93db6\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.410057 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-telemetry-combined-ca-bundle\") pod \"25080ebe-a4ea-4698-b64c-b7064ff93db6\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.410099 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-nova-combined-ca-bundle\") pod \"25080ebe-a4ea-4698-b64c-b7064ff93db6\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.410149 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"25080ebe-a4ea-4698-b64c-b7064ff93db6\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.410180 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-neutron-metadata-combined-ca-bundle\") pod \"25080ebe-a4ea-4698-b64c-b7064ff93db6\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " Feb 19 10:16:34 crc 
kubenswrapper[4965]: I0219 10:16:34.410271 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-repo-setup-combined-ca-bundle\") pod \"25080ebe-a4ea-4698-b64c-b7064ff93db6\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.410355 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-ssh-key-openstack-edpm-ipam\") pod \"25080ebe-a4ea-4698-b64c-b7064ff93db6\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.410406 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-inventory\") pod \"25080ebe-a4ea-4698-b64c-b7064ff93db6\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.410519 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"25080ebe-a4ea-4698-b64c-b7064ff93db6\" (UID: \"25080ebe-a4ea-4698-b64c-b7064ff93db6\") " Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.417045 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "25080ebe-a4ea-4698-b64c-b7064ff93db6" (UID: "25080ebe-a4ea-4698-b64c-b7064ff93db6"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.417220 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "25080ebe-a4ea-4698-b64c-b7064ff93db6" (UID: "25080ebe-a4ea-4698-b64c-b7064ff93db6"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.417599 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "25080ebe-a4ea-4698-b64c-b7064ff93db6" (UID: "25080ebe-a4ea-4698-b64c-b7064ff93db6"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.418305 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "25080ebe-a4ea-4698-b64c-b7064ff93db6" (UID: "25080ebe-a4ea-4698-b64c-b7064ff93db6"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.418890 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "25080ebe-a4ea-4698-b64c-b7064ff93db6" (UID: "25080ebe-a4ea-4698-b64c-b7064ff93db6"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.419759 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "25080ebe-a4ea-4698-b64c-b7064ff93db6" (UID: "25080ebe-a4ea-4698-b64c-b7064ff93db6"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.421515 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "25080ebe-a4ea-4698-b64c-b7064ff93db6" (UID: "25080ebe-a4ea-4698-b64c-b7064ff93db6"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.422666 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "25080ebe-a4ea-4698-b64c-b7064ff93db6" (UID: "25080ebe-a4ea-4698-b64c-b7064ff93db6"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.426532 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "25080ebe-a4ea-4698-b64c-b7064ff93db6" (UID: "25080ebe-a4ea-4698-b64c-b7064ff93db6"). 
InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.426667 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "25080ebe-a4ea-4698-b64c-b7064ff93db6" (UID: "25080ebe-a4ea-4698-b64c-b7064ff93db6"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.426701 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-kube-api-access-qctgf" (OuterVolumeSpecName: "kube-api-access-qctgf") pod "25080ebe-a4ea-4698-b64c-b7064ff93db6" (UID: "25080ebe-a4ea-4698-b64c-b7064ff93db6"). InnerVolumeSpecName "kube-api-access-qctgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.428040 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "25080ebe-a4ea-4698-b64c-b7064ff93db6" (UID: "25080ebe-a4ea-4698-b64c-b7064ff93db6"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.453521 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-inventory" (OuterVolumeSpecName: "inventory") pod "25080ebe-a4ea-4698-b64c-b7064ff93db6" (UID: "25080ebe-a4ea-4698-b64c-b7064ff93db6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.456313 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "25080ebe-a4ea-4698-b64c-b7064ff93db6" (UID: "25080ebe-a4ea-4698-b64c-b7064ff93db6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.513399 4965 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.513456 4965 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.513474 4965 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.513488 4965 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.513501 4965 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.513515 4965 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.513531 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qctgf\" (UniqueName: \"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-kube-api-access-qctgf\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.513543 4965 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.513555 4965 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.513568 4965 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25080ebe-a4ea-4698-b64c-b7064ff93db6-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.513583 4965 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.513596 4965 
reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.513609 4965 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.513621 4965 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25080ebe-a4ea-4698-b64c-b7064ff93db6-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.549800 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" event={"ID":"25080ebe-a4ea-4698-b64c-b7064ff93db6","Type":"ContainerDied","Data":"b449e54b42e0a7331b8dfe555430c3717979f9a60d32df50d308e541f2729bae"} Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.550019 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b449e54b42e0a7331b8dfe555430c3717979f9a60d32df50d308e541f2729bae" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.549858 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gtd56" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.688528 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh"] Feb 19 10:16:34 crc kubenswrapper[4965]: E0219 10:16:34.689283 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25080ebe-a4ea-4698-b64c-b7064ff93db6" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.689361 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="25080ebe-a4ea-4698-b64c-b7064ff93db6" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.689641 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="25080ebe-a4ea-4698-b64c-b7064ff93db6" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.690421 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.694268 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.694642 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.695647 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.695647 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.696296 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cthw6" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.732632 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh"] Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.821102 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24c52aa6-9277-4040-8262-1bac8005a463-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xd7zh\" (UID: \"24c52aa6-9277-4040-8262-1bac8005a463\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.822545 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/24c52aa6-9277-4040-8262-1bac8005a463-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xd7zh\" (UID: \"24c52aa6-9277-4040-8262-1bac8005a463\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.822702 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24c52aa6-9277-4040-8262-1bac8005a463-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xd7zh\" (UID: \"24c52aa6-9277-4040-8262-1bac8005a463\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.822869 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brcqs\" (UniqueName: \"kubernetes.io/projected/24c52aa6-9277-4040-8262-1bac8005a463-kube-api-access-brcqs\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xd7zh\" (UID: \"24c52aa6-9277-4040-8262-1bac8005a463\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.823191 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24c52aa6-9277-4040-8262-1bac8005a463-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xd7zh\" (UID: \"24c52aa6-9277-4040-8262-1bac8005a463\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.927897 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24c52aa6-9277-4040-8262-1bac8005a463-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xd7zh\" (UID: \"24c52aa6-9277-4040-8262-1bac8005a463\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.928370 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24c52aa6-9277-4040-8262-1bac8005a463-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xd7zh\" (UID: \"24c52aa6-9277-4040-8262-1bac8005a463\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.928754 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/24c52aa6-9277-4040-8262-1bac8005a463-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xd7zh\" (UID: \"24c52aa6-9277-4040-8262-1bac8005a463\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.929004 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24c52aa6-9277-4040-8262-1bac8005a463-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xd7zh\" (UID: \"24c52aa6-9277-4040-8262-1bac8005a463\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.929425 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brcqs\" (UniqueName: \"kubernetes.io/projected/24c52aa6-9277-4040-8262-1bac8005a463-kube-api-access-brcqs\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xd7zh\" (UID: \"24c52aa6-9277-4040-8262-1bac8005a463\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.930797 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/24c52aa6-9277-4040-8262-1bac8005a463-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xd7zh\" (UID: \"24c52aa6-9277-4040-8262-1bac8005a463\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.937706 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24c52aa6-9277-4040-8262-1bac8005a463-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xd7zh\" (UID: \"24c52aa6-9277-4040-8262-1bac8005a463\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.937892 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24c52aa6-9277-4040-8262-1bac8005a463-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xd7zh\" (UID: \"24c52aa6-9277-4040-8262-1bac8005a463\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.938090 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24c52aa6-9277-4040-8262-1bac8005a463-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xd7zh\" (UID: \"24c52aa6-9277-4040-8262-1bac8005a463\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" Feb 19 10:16:34 crc kubenswrapper[4965]: I0219 10:16:34.948721 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brcqs\" (UniqueName: \"kubernetes.io/projected/24c52aa6-9277-4040-8262-1bac8005a463-kube-api-access-brcqs\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xd7zh\" (UID: \"24c52aa6-9277-4040-8262-1bac8005a463\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" Feb 19 10:16:35 crc kubenswrapper[4965]: I0219 10:16:35.016582 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" Feb 19 10:16:35 crc kubenswrapper[4965]: I0219 10:16:35.618001 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh"] Feb 19 10:16:36 crc kubenswrapper[4965]: I0219 10:16:36.575949 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" event={"ID":"24c52aa6-9277-4040-8262-1bac8005a463","Type":"ContainerStarted","Data":"cbba295272fe9636505a138952c150968aac0b814640fd54c2dc24fba89d53bd"} Feb 19 10:16:36 crc kubenswrapper[4965]: I0219 10:16:36.576543 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" event={"ID":"24c52aa6-9277-4040-8262-1bac8005a463","Type":"ContainerStarted","Data":"f05a9d7e8af07b8ce5953ef16369c340aa83b1bb53dbb58657fa7ff9c6f89b0f"} Feb 19 10:16:36 crc kubenswrapper[4965]: I0219 10:16:36.600793 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" podStartSLOduration=2.161125442 podStartE2EDuration="2.600776218s" podCreationTimestamp="2026-02-19 10:16:34 +0000 UTC" firstStartedPulling="2026-02-19 10:16:35.619141271 +0000 UTC m=+2051.240462581" lastFinishedPulling="2026-02-19 10:16:36.058792047 +0000 UTC m=+2051.680113357" observedRunningTime="2026-02-19 10:16:36.589418062 +0000 UTC m=+2052.210739372" watchObservedRunningTime="2026-02-19 10:16:36.600776218 +0000 UTC m=+2052.222097528" Feb 19 10:16:42 crc kubenswrapper[4965]: I0219 10:16:42.264457 4965 scope.go:117] "RemoveContainer" containerID="9b278ef6932d007329c5791a7c5476eeda2384272529608de619fae7a14c7687" Feb 19 10:16:46 crc kubenswrapper[4965]: I0219 10:16:46.601761 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:16:46 crc kubenswrapper[4965]: I0219 10:16:46.602141 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:16:46 crc kubenswrapper[4965]: I0219 10:16:46.602188 4965 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" Feb 19 10:16:46 crc kubenswrapper[4965]: I0219 10:16:46.602937 4965 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6e790f4d4e6658655db9f91927db114ee9b37405e8ae4a7d350746d0c209e2f2"} pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:16:46 crc kubenswrapper[4965]: I0219 10:16:46.602981 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" containerID="cri-o://6e790f4d4e6658655db9f91927db114ee9b37405e8ae4a7d350746d0c209e2f2" gracePeriod=600 Feb 19 10:16:46 crc kubenswrapper[4965]: I0219 10:16:46.761742 4965 generic.go:334] "Generic (PLEG): container finished" podID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerID="6e790f4d4e6658655db9f91927db114ee9b37405e8ae4a7d350746d0c209e2f2" exitCode=0 Feb 19 10:16:46 crc kubenswrapper[4965]: I0219 10:16:46.761777 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" 
event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerDied","Data":"6e790f4d4e6658655db9f91927db114ee9b37405e8ae4a7d350746d0c209e2f2"} Feb 19 10:16:46 crc kubenswrapper[4965]: I0219 10:16:46.761815 4965 scope.go:117] "RemoveContainer" containerID="76f06bc02934238a40bb54d8f37e941fa531c6ec466807a5bec720886092509c" Feb 19 10:16:47 crc kubenswrapper[4965]: I0219 10:16:47.772030 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerStarted","Data":"0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf"} Feb 19 10:16:56 crc kubenswrapper[4965]: I0219 10:16:56.039913 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-nk4c7"] Feb 19 10:16:56 crc kubenswrapper[4965]: I0219 10:16:56.049370 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-nk4c7"] Feb 19 10:16:57 crc kubenswrapper[4965]: I0219 10:16:57.216298 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc70a082-b592-4eb9-80f9-481a1bd9fe0c" path="/var/lib/kubelet/pods/cc70a082-b592-4eb9-80f9-481a1bd9fe0c/volumes" Feb 19 10:17:04 crc kubenswrapper[4965]: I0219 10:17:04.038035 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-7mdwh"] Feb 19 10:17:04 crc kubenswrapper[4965]: I0219 10:17:04.053456 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-7mdwh"] Feb 19 10:17:05 crc kubenswrapper[4965]: I0219 10:17:05.214446 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4c84601-c0b3-46eb-8323-08b550442026" path="/var/lib/kubelet/pods/e4c84601-c0b3-46eb-8323-08b550442026/volumes" Feb 19 10:17:34 crc kubenswrapper[4965]: I0219 10:17:34.215239 4965 generic.go:334] "Generic (PLEG): container finished" podID="24c52aa6-9277-4040-8262-1bac8005a463" 
containerID="cbba295272fe9636505a138952c150968aac0b814640fd54c2dc24fba89d53bd" exitCode=0 Feb 19 10:17:34 crc kubenswrapper[4965]: I0219 10:17:34.215349 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" event={"ID":"24c52aa6-9277-4040-8262-1bac8005a463","Type":"ContainerDied","Data":"cbba295272fe9636505a138952c150968aac0b814640fd54c2dc24fba89d53bd"} Feb 19 10:17:35 crc kubenswrapper[4965]: I0219 10:17:35.788773 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" Feb 19 10:17:35 crc kubenswrapper[4965]: I0219 10:17:35.918341 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24c52aa6-9277-4040-8262-1bac8005a463-ssh-key-openstack-edpm-ipam\") pod \"24c52aa6-9277-4040-8262-1bac8005a463\" (UID: \"24c52aa6-9277-4040-8262-1bac8005a463\") " Feb 19 10:17:35 crc kubenswrapper[4965]: I0219 10:17:35.918557 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/24c52aa6-9277-4040-8262-1bac8005a463-ovncontroller-config-0\") pod \"24c52aa6-9277-4040-8262-1bac8005a463\" (UID: \"24c52aa6-9277-4040-8262-1bac8005a463\") " Feb 19 10:17:35 crc kubenswrapper[4965]: I0219 10:17:35.918689 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brcqs\" (UniqueName: \"kubernetes.io/projected/24c52aa6-9277-4040-8262-1bac8005a463-kube-api-access-brcqs\") pod \"24c52aa6-9277-4040-8262-1bac8005a463\" (UID: \"24c52aa6-9277-4040-8262-1bac8005a463\") " Feb 19 10:17:35 crc kubenswrapper[4965]: I0219 10:17:35.918753 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24c52aa6-9277-4040-8262-1bac8005a463-inventory\") pod 
\"24c52aa6-9277-4040-8262-1bac8005a463\" (UID: \"24c52aa6-9277-4040-8262-1bac8005a463\") " Feb 19 10:17:35 crc kubenswrapper[4965]: I0219 10:17:35.918913 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24c52aa6-9277-4040-8262-1bac8005a463-ovn-combined-ca-bundle\") pod \"24c52aa6-9277-4040-8262-1bac8005a463\" (UID: \"24c52aa6-9277-4040-8262-1bac8005a463\") " Feb 19 10:17:35 crc kubenswrapper[4965]: I0219 10:17:35.926300 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24c52aa6-9277-4040-8262-1bac8005a463-kube-api-access-brcqs" (OuterVolumeSpecName: "kube-api-access-brcqs") pod "24c52aa6-9277-4040-8262-1bac8005a463" (UID: "24c52aa6-9277-4040-8262-1bac8005a463"). InnerVolumeSpecName "kube-api-access-brcqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:17:35 crc kubenswrapper[4965]: I0219 10:17:35.937014 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24c52aa6-9277-4040-8262-1bac8005a463-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "24c52aa6-9277-4040-8262-1bac8005a463" (UID: "24c52aa6-9277-4040-8262-1bac8005a463"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:17:35 crc kubenswrapper[4965]: I0219 10:17:35.956538 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24c52aa6-9277-4040-8262-1bac8005a463-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "24c52aa6-9277-4040-8262-1bac8005a463" (UID: "24c52aa6-9277-4040-8262-1bac8005a463"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:17:35 crc kubenswrapper[4965]: I0219 10:17:35.957476 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24c52aa6-9277-4040-8262-1bac8005a463-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "24c52aa6-9277-4040-8262-1bac8005a463" (UID: "24c52aa6-9277-4040-8262-1bac8005a463"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:17:35 crc kubenswrapper[4965]: I0219 10:17:35.962168 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24c52aa6-9277-4040-8262-1bac8005a463-inventory" (OuterVolumeSpecName: "inventory") pod "24c52aa6-9277-4040-8262-1bac8005a463" (UID: "24c52aa6-9277-4040-8262-1bac8005a463"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.021704 4965 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/24c52aa6-9277-4040-8262-1bac8005a463-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.021730 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brcqs\" (UniqueName: \"kubernetes.io/projected/24c52aa6-9277-4040-8262-1bac8005a463-kube-api-access-brcqs\") on node \"crc\" DevicePath \"\"" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.021740 4965 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24c52aa6-9277-4040-8262-1bac8005a463-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.021749 4965 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/24c52aa6-9277-4040-8262-1bac8005a463-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.021757 4965 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24c52aa6-9277-4040-8262-1bac8005a463-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.236507 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" event={"ID":"24c52aa6-9277-4040-8262-1bac8005a463","Type":"ContainerDied","Data":"f05a9d7e8af07b8ce5953ef16369c340aa83b1bb53dbb58657fa7ff9c6f89b0f"} Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.236573 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f05a9d7e8af07b8ce5953ef16369c340aa83b1bb53dbb58657fa7ff9c6f89b0f" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.236631 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xd7zh" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.333263 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76"] Feb 19 10:17:36 crc kubenswrapper[4965]: E0219 10:17:36.333715 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24c52aa6-9277-4040-8262-1bac8005a463" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.333737 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="24c52aa6-9277-4040-8262-1bac8005a463" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.333958 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="24c52aa6-9277-4040-8262-1bac8005a463" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.334755 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.339417 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.339584 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.340317 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.342478 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.342815 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.342986 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cthw6" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.357952 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76"] Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.429990 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gklnx\" (UniqueName: \"kubernetes.io/projected/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-kube-api-access-gklnx\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76\" (UID: \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.430034 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76\" (UID: \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.430082 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76\" (UID: \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.430130 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76\" (UID: \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.430160 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76\" (UID: \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.430224 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76\" (UID: \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.531493 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76\" (UID: \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.531623 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76\" (UID: \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.531664 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gklnx\" (UniqueName: \"kubernetes.io/projected/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-kube-api-access-gklnx\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76\" (UID: \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.531722 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" 
(UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76\" (UID: \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.531783 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76\" (UID: \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.531820 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76\" (UID: \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.537398 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76\" (UID: \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.537941 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76\" (UID: 
\"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.538708 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76\" (UID: \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.545079 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76\" (UID: \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.545847 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76\" (UID: \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.558990 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gklnx\" (UniqueName: \"kubernetes.io/projected/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-kube-api-access-gklnx\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76\" (UID: \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" Feb 19 10:17:36 crc kubenswrapper[4965]: I0219 10:17:36.660037 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" Feb 19 10:17:37 crc kubenswrapper[4965]: I0219 10:17:37.191206 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76"] Feb 19 10:17:37 crc kubenswrapper[4965]: I0219 10:17:37.261512 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" event={"ID":"1189041a-04c1-4fa1-9c71-daf77ef8b3fe","Type":"ContainerStarted","Data":"734480938a3c87dc4889e4c2105f25f8e1cb006e390778a687dcfd9b904380a8"} Feb 19 10:17:38 crc kubenswrapper[4965]: I0219 10:17:38.276376 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" event={"ID":"1189041a-04c1-4fa1-9c71-daf77ef8b3fe","Type":"ContainerStarted","Data":"981e08d91c9f171a2dd23aa6376e00fdfa97003cebf6b65fcea3da577a0787d2"} Feb 19 10:17:38 crc kubenswrapper[4965]: I0219 10:17:38.310645 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" podStartSLOduration=1.815395368 podStartE2EDuration="2.310625914s" podCreationTimestamp="2026-02-19 10:17:36 +0000 UTC" firstStartedPulling="2026-02-19 10:17:37.198308243 +0000 UTC m=+2112.819629553" lastFinishedPulling="2026-02-19 10:17:37.693538789 +0000 UTC m=+2113.314860099" observedRunningTime="2026-02-19 10:17:38.308909162 +0000 UTC m=+2113.930230472" watchObservedRunningTime="2026-02-19 10:17:38.310625914 +0000 UTC m=+2113.931947234" Feb 19 10:17:42 crc kubenswrapper[4965]: I0219 10:17:42.341116 4965 scope.go:117] "RemoveContainer" 
containerID="5c703a6bcc904ca4ef8f7c4cf2370e88d7291a0744a11e85b47b9d8529927454" Feb 19 10:17:42 crc kubenswrapper[4965]: I0219 10:17:42.373753 4965 scope.go:117] "RemoveContainer" containerID="0e4ccd393278b74e5792acea06d6e0a4617cf02ba8e34034eed444c716de21e0" Feb 19 10:17:45 crc kubenswrapper[4965]: I0219 10:17:45.357315 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qvmvr"] Feb 19 10:17:45 crc kubenswrapper[4965]: I0219 10:17:45.362213 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qvmvr" Feb 19 10:17:45 crc kubenswrapper[4965]: I0219 10:17:45.368611 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qvmvr"] Feb 19 10:17:45 crc kubenswrapper[4965]: I0219 10:17:45.427714 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aa1b763-ba6e-41be-9b19-88b769352327-utilities\") pod \"community-operators-qvmvr\" (UID: \"9aa1b763-ba6e-41be-9b19-88b769352327\") " pod="openshift-marketplace/community-operators-qvmvr" Feb 19 10:17:45 crc kubenswrapper[4965]: I0219 10:17:45.428085 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aa1b763-ba6e-41be-9b19-88b769352327-catalog-content\") pod \"community-operators-qvmvr\" (UID: \"9aa1b763-ba6e-41be-9b19-88b769352327\") " pod="openshift-marketplace/community-operators-qvmvr" Feb 19 10:17:45 crc kubenswrapper[4965]: I0219 10:17:45.428246 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vfwh\" (UniqueName: \"kubernetes.io/projected/9aa1b763-ba6e-41be-9b19-88b769352327-kube-api-access-9vfwh\") pod \"community-operators-qvmvr\" (UID: \"9aa1b763-ba6e-41be-9b19-88b769352327\") " 
pod="openshift-marketplace/community-operators-qvmvr" Feb 19 10:17:45 crc kubenswrapper[4965]: I0219 10:17:45.530233 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vfwh\" (UniqueName: \"kubernetes.io/projected/9aa1b763-ba6e-41be-9b19-88b769352327-kube-api-access-9vfwh\") pod \"community-operators-qvmvr\" (UID: \"9aa1b763-ba6e-41be-9b19-88b769352327\") " pod="openshift-marketplace/community-operators-qvmvr" Feb 19 10:17:45 crc kubenswrapper[4965]: I0219 10:17:45.530471 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aa1b763-ba6e-41be-9b19-88b769352327-utilities\") pod \"community-operators-qvmvr\" (UID: \"9aa1b763-ba6e-41be-9b19-88b769352327\") " pod="openshift-marketplace/community-operators-qvmvr" Feb 19 10:17:45 crc kubenswrapper[4965]: I0219 10:17:45.530588 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aa1b763-ba6e-41be-9b19-88b769352327-catalog-content\") pod \"community-operators-qvmvr\" (UID: \"9aa1b763-ba6e-41be-9b19-88b769352327\") " pod="openshift-marketplace/community-operators-qvmvr" Feb 19 10:17:45 crc kubenswrapper[4965]: I0219 10:17:45.531041 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aa1b763-ba6e-41be-9b19-88b769352327-utilities\") pod \"community-operators-qvmvr\" (UID: \"9aa1b763-ba6e-41be-9b19-88b769352327\") " pod="openshift-marketplace/community-operators-qvmvr" Feb 19 10:17:45 crc kubenswrapper[4965]: I0219 10:17:45.531162 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aa1b763-ba6e-41be-9b19-88b769352327-catalog-content\") pod \"community-operators-qvmvr\" (UID: \"9aa1b763-ba6e-41be-9b19-88b769352327\") " 
pod="openshift-marketplace/community-operators-qvmvr" Feb 19 10:17:45 crc kubenswrapper[4965]: I0219 10:17:45.555010 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vfwh\" (UniqueName: \"kubernetes.io/projected/9aa1b763-ba6e-41be-9b19-88b769352327-kube-api-access-9vfwh\") pod \"community-operators-qvmvr\" (UID: \"9aa1b763-ba6e-41be-9b19-88b769352327\") " pod="openshift-marketplace/community-operators-qvmvr" Feb 19 10:17:45 crc kubenswrapper[4965]: I0219 10:17:45.694551 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qvmvr" Feb 19 10:17:46 crc kubenswrapper[4965]: I0219 10:17:46.395775 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qvmvr"] Feb 19 10:17:47 crc kubenswrapper[4965]: I0219 10:17:47.379533 4965 generic.go:334] "Generic (PLEG): container finished" podID="9aa1b763-ba6e-41be-9b19-88b769352327" containerID="0851b1f10f4f21bf901efa2698631699fb0bb107a71a0ebd29f7f8bb64ce6714" exitCode=0 Feb 19 10:17:47 crc kubenswrapper[4965]: I0219 10:17:47.379611 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvmvr" event={"ID":"9aa1b763-ba6e-41be-9b19-88b769352327","Type":"ContainerDied","Data":"0851b1f10f4f21bf901efa2698631699fb0bb107a71a0ebd29f7f8bb64ce6714"} Feb 19 10:17:47 crc kubenswrapper[4965]: I0219 10:17:47.379985 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvmvr" event={"ID":"9aa1b763-ba6e-41be-9b19-88b769352327","Type":"ContainerStarted","Data":"67a7fffdc71445db9b3b925be31e7c4b9a55210194932ed544e51685b5b0b049"} Feb 19 10:17:49 crc kubenswrapper[4965]: I0219 10:17:49.399255 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvmvr" 
event={"ID":"9aa1b763-ba6e-41be-9b19-88b769352327","Type":"ContainerStarted","Data":"07678727b9e30790edc1a0dfa5200f70e29e6ee192bbb5479d80e2701deab95c"} Feb 19 10:17:50 crc kubenswrapper[4965]: I0219 10:17:50.408937 4965 generic.go:334] "Generic (PLEG): container finished" podID="9aa1b763-ba6e-41be-9b19-88b769352327" containerID="07678727b9e30790edc1a0dfa5200f70e29e6ee192bbb5479d80e2701deab95c" exitCode=0 Feb 19 10:17:50 crc kubenswrapper[4965]: I0219 10:17:50.408983 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvmvr" event={"ID":"9aa1b763-ba6e-41be-9b19-88b769352327","Type":"ContainerDied","Data":"07678727b9e30790edc1a0dfa5200f70e29e6ee192bbb5479d80e2701deab95c"} Feb 19 10:17:52 crc kubenswrapper[4965]: I0219 10:17:52.432559 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvmvr" event={"ID":"9aa1b763-ba6e-41be-9b19-88b769352327","Type":"ContainerStarted","Data":"d8ec1da7b28e4959f004c9c7f4fbaa07ab53b11bb2f5bc9123e6a5aeec882b49"} Feb 19 10:17:52 crc kubenswrapper[4965]: I0219 10:17:52.454940 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qvmvr" podStartSLOduration=3.6263893 podStartE2EDuration="7.454923357s" podCreationTimestamp="2026-02-19 10:17:45 +0000 UTC" firstStartedPulling="2026-02-19 10:17:47.381863209 +0000 UTC m=+2123.003184519" lastFinishedPulling="2026-02-19 10:17:51.210397266 +0000 UTC m=+2126.831718576" observedRunningTime="2026-02-19 10:17:52.448405879 +0000 UTC m=+2128.069727189" watchObservedRunningTime="2026-02-19 10:17:52.454923357 +0000 UTC m=+2128.076244657" Feb 19 10:17:55 crc kubenswrapper[4965]: I0219 10:17:55.695460 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qvmvr" Feb 19 10:17:55 crc kubenswrapper[4965]: I0219 10:17:55.696025 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-qvmvr" Feb 19 10:17:55 crc kubenswrapper[4965]: I0219 10:17:55.744928 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qvmvr" Feb 19 10:17:56 crc kubenswrapper[4965]: I0219 10:17:56.516265 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qvmvr" Feb 19 10:17:58 crc kubenswrapper[4965]: I0219 10:17:58.940048 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qvmvr"] Feb 19 10:17:58 crc kubenswrapper[4965]: I0219 10:17:58.940618 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qvmvr" podUID="9aa1b763-ba6e-41be-9b19-88b769352327" containerName="registry-server" containerID="cri-o://d8ec1da7b28e4959f004c9c7f4fbaa07ab53b11bb2f5bc9123e6a5aeec882b49" gracePeriod=2 Feb 19 10:17:59 crc kubenswrapper[4965]: I0219 10:17:59.504900 4965 generic.go:334] "Generic (PLEG): container finished" podID="9aa1b763-ba6e-41be-9b19-88b769352327" containerID="d8ec1da7b28e4959f004c9c7f4fbaa07ab53b11bb2f5bc9123e6a5aeec882b49" exitCode=0 Feb 19 10:17:59 crc kubenswrapper[4965]: I0219 10:17:59.504981 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvmvr" event={"ID":"9aa1b763-ba6e-41be-9b19-88b769352327","Type":"ContainerDied","Data":"d8ec1da7b28e4959f004c9c7f4fbaa07ab53b11bb2f5bc9123e6a5aeec882b49"} Feb 19 10:17:59 crc kubenswrapper[4965]: I0219 10:17:59.505190 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvmvr" event={"ID":"9aa1b763-ba6e-41be-9b19-88b769352327","Type":"ContainerDied","Data":"67a7fffdc71445db9b3b925be31e7c4b9a55210194932ed544e51685b5b0b049"} Feb 19 10:17:59 crc kubenswrapper[4965]: I0219 10:17:59.505224 4965 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="67a7fffdc71445db9b3b925be31e7c4b9a55210194932ed544e51685b5b0b049" Feb 19 10:17:59 crc kubenswrapper[4965]: I0219 10:17:59.565050 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qvmvr" Feb 19 10:17:59 crc kubenswrapper[4965]: I0219 10:17:59.619644 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aa1b763-ba6e-41be-9b19-88b769352327-catalog-content\") pod \"9aa1b763-ba6e-41be-9b19-88b769352327\" (UID: \"9aa1b763-ba6e-41be-9b19-88b769352327\") " Feb 19 10:17:59 crc kubenswrapper[4965]: I0219 10:17:59.619728 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vfwh\" (UniqueName: \"kubernetes.io/projected/9aa1b763-ba6e-41be-9b19-88b769352327-kube-api-access-9vfwh\") pod \"9aa1b763-ba6e-41be-9b19-88b769352327\" (UID: \"9aa1b763-ba6e-41be-9b19-88b769352327\") " Feb 19 10:17:59 crc kubenswrapper[4965]: I0219 10:17:59.619872 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aa1b763-ba6e-41be-9b19-88b769352327-utilities\") pod \"9aa1b763-ba6e-41be-9b19-88b769352327\" (UID: \"9aa1b763-ba6e-41be-9b19-88b769352327\") " Feb 19 10:17:59 crc kubenswrapper[4965]: I0219 10:17:59.621736 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9aa1b763-ba6e-41be-9b19-88b769352327-utilities" (OuterVolumeSpecName: "utilities") pod "9aa1b763-ba6e-41be-9b19-88b769352327" (UID: "9aa1b763-ba6e-41be-9b19-88b769352327"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:17:59 crc kubenswrapper[4965]: I0219 10:17:59.630397 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aa1b763-ba6e-41be-9b19-88b769352327-kube-api-access-9vfwh" (OuterVolumeSpecName: "kube-api-access-9vfwh") pod "9aa1b763-ba6e-41be-9b19-88b769352327" (UID: "9aa1b763-ba6e-41be-9b19-88b769352327"). InnerVolumeSpecName "kube-api-access-9vfwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:17:59 crc kubenswrapper[4965]: I0219 10:17:59.703470 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9aa1b763-ba6e-41be-9b19-88b769352327-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9aa1b763-ba6e-41be-9b19-88b769352327" (UID: "9aa1b763-ba6e-41be-9b19-88b769352327"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:17:59 crc kubenswrapper[4965]: I0219 10:17:59.722207 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aa1b763-ba6e-41be-9b19-88b769352327-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:17:59 crc kubenswrapper[4965]: I0219 10:17:59.722236 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vfwh\" (UniqueName: \"kubernetes.io/projected/9aa1b763-ba6e-41be-9b19-88b769352327-kube-api-access-9vfwh\") on node \"crc\" DevicePath \"\"" Feb 19 10:17:59 crc kubenswrapper[4965]: I0219 10:17:59.722248 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aa1b763-ba6e-41be-9b19-88b769352327-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:18:00 crc kubenswrapper[4965]: I0219 10:18:00.515025 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qvmvr" Feb 19 10:18:00 crc kubenswrapper[4965]: I0219 10:18:00.592936 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qvmvr"] Feb 19 10:18:00 crc kubenswrapper[4965]: I0219 10:18:00.607972 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qvmvr"] Feb 19 10:18:01 crc kubenswrapper[4965]: I0219 10:18:01.222305 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aa1b763-ba6e-41be-9b19-88b769352327" path="/var/lib/kubelet/pods/9aa1b763-ba6e-41be-9b19-88b769352327/volumes" Feb 19 10:18:08 crc kubenswrapper[4965]: I0219 10:18:08.606925 4965 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-pbfgx container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 10:18:08 crc kubenswrapper[4965]: I0219 10:18:08.607490 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-pbfgx" podUID="13fcdc33-7dcb-4d34-86ca-bd40d679560e" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 10:18:17 crc kubenswrapper[4965]: I0219 10:18:17.745556 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-spn8r"] Feb 19 10:18:17 crc kubenswrapper[4965]: E0219 10:18:17.749164 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa1b763-ba6e-41be-9b19-88b769352327" containerName="registry-server" Feb 19 10:18:17 crc kubenswrapper[4965]: I0219 
10:18:17.749335 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa1b763-ba6e-41be-9b19-88b769352327" containerName="registry-server" Feb 19 10:18:17 crc kubenswrapper[4965]: E0219 10:18:17.749460 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa1b763-ba6e-41be-9b19-88b769352327" containerName="extract-content" Feb 19 10:18:17 crc kubenswrapper[4965]: I0219 10:18:17.749540 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa1b763-ba6e-41be-9b19-88b769352327" containerName="extract-content" Feb 19 10:18:17 crc kubenswrapper[4965]: E0219 10:18:17.749637 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa1b763-ba6e-41be-9b19-88b769352327" containerName="extract-utilities" Feb 19 10:18:17 crc kubenswrapper[4965]: I0219 10:18:17.749714 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa1b763-ba6e-41be-9b19-88b769352327" containerName="extract-utilities" Feb 19 10:18:17 crc kubenswrapper[4965]: I0219 10:18:17.750037 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aa1b763-ba6e-41be-9b19-88b769352327" containerName="registry-server" Feb 19 10:18:17 crc kubenswrapper[4965]: I0219 10:18:17.752097 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spn8r" Feb 19 10:18:17 crc kubenswrapper[4965]: I0219 10:18:17.762365 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-spn8r"] Feb 19 10:18:17 crc kubenswrapper[4965]: I0219 10:18:17.811786 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f830ac86-e65f-4836-8ac5-c9d602296e8c-catalog-content\") pod \"redhat-marketplace-spn8r\" (UID: \"f830ac86-e65f-4836-8ac5-c9d602296e8c\") " pod="openshift-marketplace/redhat-marketplace-spn8r" Feb 19 10:18:17 crc kubenswrapper[4965]: I0219 10:18:17.811941 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f830ac86-e65f-4836-8ac5-c9d602296e8c-utilities\") pod \"redhat-marketplace-spn8r\" (UID: \"f830ac86-e65f-4836-8ac5-c9d602296e8c\") " pod="openshift-marketplace/redhat-marketplace-spn8r" Feb 19 10:18:17 crc kubenswrapper[4965]: I0219 10:18:17.811996 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgf8t\" (UniqueName: \"kubernetes.io/projected/f830ac86-e65f-4836-8ac5-c9d602296e8c-kube-api-access-tgf8t\") pod \"redhat-marketplace-spn8r\" (UID: \"f830ac86-e65f-4836-8ac5-c9d602296e8c\") " pod="openshift-marketplace/redhat-marketplace-spn8r" Feb 19 10:18:17 crc kubenswrapper[4965]: I0219 10:18:17.914148 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f830ac86-e65f-4836-8ac5-c9d602296e8c-catalog-content\") pod \"redhat-marketplace-spn8r\" (UID: \"f830ac86-e65f-4836-8ac5-c9d602296e8c\") " pod="openshift-marketplace/redhat-marketplace-spn8r" Feb 19 10:18:17 crc kubenswrapper[4965]: I0219 10:18:17.914538 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f830ac86-e65f-4836-8ac5-c9d602296e8c-utilities\") pod \"redhat-marketplace-spn8r\" (UID: \"f830ac86-e65f-4836-8ac5-c9d602296e8c\") " pod="openshift-marketplace/redhat-marketplace-spn8r" Feb 19 10:18:17 crc kubenswrapper[4965]: I0219 10:18:17.914632 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgf8t\" (UniqueName: \"kubernetes.io/projected/f830ac86-e65f-4836-8ac5-c9d602296e8c-kube-api-access-tgf8t\") pod \"redhat-marketplace-spn8r\" (UID: \"f830ac86-e65f-4836-8ac5-c9d602296e8c\") " pod="openshift-marketplace/redhat-marketplace-spn8r" Feb 19 10:18:17 crc kubenswrapper[4965]: I0219 10:18:17.914586 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f830ac86-e65f-4836-8ac5-c9d602296e8c-catalog-content\") pod \"redhat-marketplace-spn8r\" (UID: \"f830ac86-e65f-4836-8ac5-c9d602296e8c\") " pod="openshift-marketplace/redhat-marketplace-spn8r" Feb 19 10:18:17 crc kubenswrapper[4965]: I0219 10:18:17.914913 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f830ac86-e65f-4836-8ac5-c9d602296e8c-utilities\") pod \"redhat-marketplace-spn8r\" (UID: \"f830ac86-e65f-4836-8ac5-c9d602296e8c\") " pod="openshift-marketplace/redhat-marketplace-spn8r" Feb 19 10:18:17 crc kubenswrapper[4965]: I0219 10:18:17.935131 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgf8t\" (UniqueName: \"kubernetes.io/projected/f830ac86-e65f-4836-8ac5-c9d602296e8c-kube-api-access-tgf8t\") pod \"redhat-marketplace-spn8r\" (UID: \"f830ac86-e65f-4836-8ac5-c9d602296e8c\") " pod="openshift-marketplace/redhat-marketplace-spn8r" Feb 19 10:18:18 crc kubenswrapper[4965]: I0219 10:18:18.083846 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spn8r" Feb 19 10:18:18 crc kubenswrapper[4965]: I0219 10:18:18.633169 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-spn8r"] Feb 19 10:18:19 crc kubenswrapper[4965]: I0219 10:18:19.160974 4965 generic.go:334] "Generic (PLEG): container finished" podID="f830ac86-e65f-4836-8ac5-c9d602296e8c" containerID="802caa3716baccfca7ba049ab1a9150d9afab5c2c2b8c192506c5454ad0e75a2" exitCode=0 Feb 19 10:18:19 crc kubenswrapper[4965]: I0219 10:18:19.162033 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spn8r" event={"ID":"f830ac86-e65f-4836-8ac5-c9d602296e8c","Type":"ContainerDied","Data":"802caa3716baccfca7ba049ab1a9150d9afab5c2c2b8c192506c5454ad0e75a2"} Feb 19 10:18:19 crc kubenswrapper[4965]: I0219 10:18:19.162219 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spn8r" event={"ID":"f830ac86-e65f-4836-8ac5-c9d602296e8c","Type":"ContainerStarted","Data":"92a8169134f4ec05ef3d612e77c960aeacc64a40a85a0abc0ef49d4b9362db51"} Feb 19 10:18:21 crc kubenswrapper[4965]: I0219 10:18:21.179992 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spn8r" event={"ID":"f830ac86-e65f-4836-8ac5-c9d602296e8c","Type":"ContainerStarted","Data":"edd3c3569b83aad91cb5f5cdc990a314a210517aaee9553fb0e00bcd73d76b8b"} Feb 19 10:18:23 crc kubenswrapper[4965]: I0219 10:18:23.200853 4965 generic.go:334] "Generic (PLEG): container finished" podID="f830ac86-e65f-4836-8ac5-c9d602296e8c" containerID="edd3c3569b83aad91cb5f5cdc990a314a210517aaee9553fb0e00bcd73d76b8b" exitCode=0 Feb 19 10:18:23 crc kubenswrapper[4965]: I0219 10:18:23.212282 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spn8r" 
event={"ID":"f830ac86-e65f-4836-8ac5-c9d602296e8c","Type":"ContainerDied","Data":"edd3c3569b83aad91cb5f5cdc990a314a210517aaee9553fb0e00bcd73d76b8b"} Feb 19 10:18:24 crc kubenswrapper[4965]: I0219 10:18:24.215121 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spn8r" event={"ID":"f830ac86-e65f-4836-8ac5-c9d602296e8c","Type":"ContainerStarted","Data":"f8b6da23b1c2bcd1bc139ae531db1a03c9284a3c523d64129505553aefcd30a6"} Feb 19 10:18:24 crc kubenswrapper[4965]: I0219 10:18:24.272480 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-spn8r" podStartSLOduration=2.749431643 podStartE2EDuration="7.272444914s" podCreationTimestamp="2026-02-19 10:18:17 +0000 UTC" firstStartedPulling="2026-02-19 10:18:19.162789487 +0000 UTC m=+2154.784110797" lastFinishedPulling="2026-02-19 10:18:23.685802718 +0000 UTC m=+2159.307124068" observedRunningTime="2026-02-19 10:18:24.237537376 +0000 UTC m=+2159.858858686" watchObservedRunningTime="2026-02-19 10:18:24.272444914 +0000 UTC m=+2159.893766264" Feb 19 10:18:27 crc kubenswrapper[4965]: I0219 10:18:27.247559 4965 generic.go:334] "Generic (PLEG): container finished" podID="1189041a-04c1-4fa1-9c71-daf77ef8b3fe" containerID="981e08d91c9f171a2dd23aa6376e00fdfa97003cebf6b65fcea3da577a0787d2" exitCode=0 Feb 19 10:18:27 crc kubenswrapper[4965]: I0219 10:18:27.248185 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" event={"ID":"1189041a-04c1-4fa1-9c71-daf77ef8b3fe","Type":"ContainerDied","Data":"981e08d91c9f171a2dd23aa6376e00fdfa97003cebf6b65fcea3da577a0787d2"} Feb 19 10:18:28 crc kubenswrapper[4965]: I0219 10:18:28.084349 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-spn8r" Feb 19 10:18:28 crc kubenswrapper[4965]: I0219 10:18:28.084425 4965 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-spn8r" Feb 19 10:18:28 crc kubenswrapper[4965]: I0219 10:18:28.139695 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-spn8r" Feb 19 10:18:28 crc kubenswrapper[4965]: I0219 10:18:28.323820 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-spn8r" Feb 19 10:18:28 crc kubenswrapper[4965]: I0219 10:18:28.897314 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.001791 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-inventory\") pod \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\" (UID: \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.002125 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-ssh-key-openstack-edpm-ipam\") pod \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\" (UID: \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.002183 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\" (UID: \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.002295 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" 
(UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-nova-metadata-neutron-config-0\") pod \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\" (UID: \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.002339 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gklnx\" (UniqueName: \"kubernetes.io/projected/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-kube-api-access-gklnx\") pod \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\" (UID: \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.002385 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-neutron-metadata-combined-ca-bundle\") pod \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\" (UID: \"1189041a-04c1-4fa1-9c71-daf77ef8b3fe\") " Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.007966 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "1189041a-04c1-4fa1-9c71-daf77ef8b3fe" (UID: "1189041a-04c1-4fa1-9c71-daf77ef8b3fe"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.008001 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-kube-api-access-gklnx" (OuterVolumeSpecName: "kube-api-access-gklnx") pod "1189041a-04c1-4fa1-9c71-daf77ef8b3fe" (UID: "1189041a-04c1-4fa1-9c71-daf77ef8b3fe"). InnerVolumeSpecName "kube-api-access-gklnx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.038690 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "1189041a-04c1-4fa1-9c71-daf77ef8b3fe" (UID: "1189041a-04c1-4fa1-9c71-daf77ef8b3fe"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.047064 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "1189041a-04c1-4fa1-9c71-daf77ef8b3fe" (UID: "1189041a-04c1-4fa1-9c71-daf77ef8b3fe"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.051082 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1189041a-04c1-4fa1-9c71-daf77ef8b3fe" (UID: "1189041a-04c1-4fa1-9c71-daf77ef8b3fe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.052569 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-inventory" (OuterVolumeSpecName: "inventory") pod "1189041a-04c1-4fa1-9c71-daf77ef8b3fe" (UID: "1189041a-04c1-4fa1-9c71-daf77ef8b3fe"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.106554 4965 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.106591 4965 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.106601 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gklnx\" (UniqueName: \"kubernetes.io/projected/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-kube-api-access-gklnx\") on node \"crc\" DevicePath \"\"" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.106612 4965 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.106623 4965 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.106631 4965 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1189041a-04c1-4fa1-9c71-daf77ef8b3fe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.276535 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" event={"ID":"1189041a-04c1-4fa1-9c71-daf77ef8b3fe","Type":"ContainerDied","Data":"734480938a3c87dc4889e4c2105f25f8e1cb006e390778a687dcfd9b904380a8"} Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.276565 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.276590 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="734480938a3c87dc4889e4c2105f25f8e1cb006e390778a687dcfd9b904380a8" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.456298 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8"] Feb 19 10:18:29 crc kubenswrapper[4965]: E0219 10:18:29.457425 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1189041a-04c1-4fa1-9c71-daf77ef8b3fe" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.457533 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="1189041a-04c1-4fa1-9c71-daf77ef8b3fe" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.457887 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="1189041a-04c1-4fa1-9c71-daf77ef8b3fe" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.458919 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.461565 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.462025 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cthw6" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.462351 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.462571 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.462963 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.468009 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8"] Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.617581 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8\" (UID: \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.617643 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8\" (UID: \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.617742 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8\" (UID: \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.617770 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c95v2\" (UniqueName: \"kubernetes.io/projected/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-kube-api-access-c95v2\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8\" (UID: \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.617798 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8\" (UID: \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.721367 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8\" (UID: \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.721502 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8\" (UID: \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.721703 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8\" (UID: \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.721747 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c95v2\" (UniqueName: \"kubernetes.io/projected/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-kube-api-access-c95v2\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8\" (UID: \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.721816 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8\" (UID: \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.726790 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8\" (UID: 
\"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.727093 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8\" (UID: \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.733978 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8\" (UID: \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.734918 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8\" (UID: \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.753469 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c95v2\" (UniqueName: \"kubernetes.io/projected/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-kube-api-access-c95v2\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8\" (UID: \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" Feb 19 10:18:29 crc kubenswrapper[4965]: I0219 10:18:29.786886 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" Feb 19 10:18:30 crc kubenswrapper[4965]: I0219 10:18:30.395690 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8"] Feb 19 10:18:31 crc kubenswrapper[4965]: I0219 10:18:31.296994 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" event={"ID":"6b29dda1-69ac-4d2a-a078-e2f1a7103b67","Type":"ContainerStarted","Data":"4761880427f64304d96a6530071ac20474f2f427a137001ccdc00e5d99cba9b3"} Feb 19 10:18:31 crc kubenswrapper[4965]: I0219 10:18:31.297336 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" event={"ID":"6b29dda1-69ac-4d2a-a078-e2f1a7103b67","Type":"ContainerStarted","Data":"aa476b86b0ebc2736b42a78dca2da46df4a8b9e0d6b69b9dcc2b504046706a45"} Feb 19 10:18:31 crc kubenswrapper[4965]: I0219 10:18:31.323653 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" podStartSLOduration=1.9123484579999999 podStartE2EDuration="2.323634016s" podCreationTimestamp="2026-02-19 10:18:29 +0000 UTC" firstStartedPulling="2026-02-19 10:18:30.395396576 +0000 UTC m=+2166.016717876" lastFinishedPulling="2026-02-19 10:18:30.806682124 +0000 UTC m=+2166.428003434" observedRunningTime="2026-02-19 10:18:31.315784136 +0000 UTC m=+2166.937105456" watchObservedRunningTime="2026-02-19 10:18:31.323634016 +0000 UTC m=+2166.944955316" Feb 19 10:18:31 crc kubenswrapper[4965]: I0219 10:18:31.599210 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-spn8r"] Feb 19 10:18:31 crc kubenswrapper[4965]: I0219 10:18:31.599468 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-spn8r" 
podUID="f830ac86-e65f-4836-8ac5-c9d602296e8c" containerName="registry-server" containerID="cri-o://f8b6da23b1c2bcd1bc139ae531db1a03c9284a3c523d64129505553aefcd30a6" gracePeriod=2 Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.300140 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spn8r" Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.311057 4965 generic.go:334] "Generic (PLEG): container finished" podID="f830ac86-e65f-4836-8ac5-c9d602296e8c" containerID="f8b6da23b1c2bcd1bc139ae531db1a03c9284a3c523d64129505553aefcd30a6" exitCode=0 Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.311134 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spn8r" Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.311158 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spn8r" event={"ID":"f830ac86-e65f-4836-8ac5-c9d602296e8c","Type":"ContainerDied","Data":"f8b6da23b1c2bcd1bc139ae531db1a03c9284a3c523d64129505553aefcd30a6"} Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.311518 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spn8r" event={"ID":"f830ac86-e65f-4836-8ac5-c9d602296e8c","Type":"ContainerDied","Data":"92a8169134f4ec05ef3d612e77c960aeacc64a40a85a0abc0ef49d4b9362db51"} Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.311557 4965 scope.go:117] "RemoveContainer" containerID="f8b6da23b1c2bcd1bc139ae531db1a03c9284a3c523d64129505553aefcd30a6" Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.334253 4965 scope.go:117] "RemoveContainer" containerID="edd3c3569b83aad91cb5f5cdc990a314a210517aaee9553fb0e00bcd73d76b8b" Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.357384 4965 scope.go:117] "RemoveContainer" 
containerID="802caa3716baccfca7ba049ab1a9150d9afab5c2c2b8c192506c5454ad0e75a2" Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.399422 4965 scope.go:117] "RemoveContainer" containerID="f8b6da23b1c2bcd1bc139ae531db1a03c9284a3c523d64129505553aefcd30a6" Feb 19 10:18:32 crc kubenswrapper[4965]: E0219 10:18:32.399786 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8b6da23b1c2bcd1bc139ae531db1a03c9284a3c523d64129505553aefcd30a6\": container with ID starting with f8b6da23b1c2bcd1bc139ae531db1a03c9284a3c523d64129505553aefcd30a6 not found: ID does not exist" containerID="f8b6da23b1c2bcd1bc139ae531db1a03c9284a3c523d64129505553aefcd30a6" Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.399819 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8b6da23b1c2bcd1bc139ae531db1a03c9284a3c523d64129505553aefcd30a6"} err="failed to get container status \"f8b6da23b1c2bcd1bc139ae531db1a03c9284a3c523d64129505553aefcd30a6\": rpc error: code = NotFound desc = could not find container \"f8b6da23b1c2bcd1bc139ae531db1a03c9284a3c523d64129505553aefcd30a6\": container with ID starting with f8b6da23b1c2bcd1bc139ae531db1a03c9284a3c523d64129505553aefcd30a6 not found: ID does not exist" Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.399840 4965 scope.go:117] "RemoveContainer" containerID="edd3c3569b83aad91cb5f5cdc990a314a210517aaee9553fb0e00bcd73d76b8b" Feb 19 10:18:32 crc kubenswrapper[4965]: E0219 10:18:32.400331 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edd3c3569b83aad91cb5f5cdc990a314a210517aaee9553fb0e00bcd73d76b8b\": container with ID starting with edd3c3569b83aad91cb5f5cdc990a314a210517aaee9553fb0e00bcd73d76b8b not found: ID does not exist" containerID="edd3c3569b83aad91cb5f5cdc990a314a210517aaee9553fb0e00bcd73d76b8b" Feb 19 10:18:32 crc 
kubenswrapper[4965]: I0219 10:18:32.400350 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edd3c3569b83aad91cb5f5cdc990a314a210517aaee9553fb0e00bcd73d76b8b"} err="failed to get container status \"edd3c3569b83aad91cb5f5cdc990a314a210517aaee9553fb0e00bcd73d76b8b\": rpc error: code = NotFound desc = could not find container \"edd3c3569b83aad91cb5f5cdc990a314a210517aaee9553fb0e00bcd73d76b8b\": container with ID starting with edd3c3569b83aad91cb5f5cdc990a314a210517aaee9553fb0e00bcd73d76b8b not found: ID does not exist" Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.400381 4965 scope.go:117] "RemoveContainer" containerID="802caa3716baccfca7ba049ab1a9150d9afab5c2c2b8c192506c5454ad0e75a2" Feb 19 10:18:32 crc kubenswrapper[4965]: E0219 10:18:32.400570 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"802caa3716baccfca7ba049ab1a9150d9afab5c2c2b8c192506c5454ad0e75a2\": container with ID starting with 802caa3716baccfca7ba049ab1a9150d9afab5c2c2b8c192506c5454ad0e75a2 not found: ID does not exist" containerID="802caa3716baccfca7ba049ab1a9150d9afab5c2c2b8c192506c5454ad0e75a2" Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.400589 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"802caa3716baccfca7ba049ab1a9150d9afab5c2c2b8c192506c5454ad0e75a2"} err="failed to get container status \"802caa3716baccfca7ba049ab1a9150d9afab5c2c2b8c192506c5454ad0e75a2\": rpc error: code = NotFound desc = could not find container \"802caa3716baccfca7ba049ab1a9150d9afab5c2c2b8c192506c5454ad0e75a2\": container with ID starting with 802caa3716baccfca7ba049ab1a9150d9afab5c2c2b8c192506c5454ad0e75a2 not found: ID does not exist" Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.478453 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f830ac86-e65f-4836-8ac5-c9d602296e8c-utilities\") pod \"f830ac86-e65f-4836-8ac5-c9d602296e8c\" (UID: \"f830ac86-e65f-4836-8ac5-c9d602296e8c\") " Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.478619 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f830ac86-e65f-4836-8ac5-c9d602296e8c-catalog-content\") pod \"f830ac86-e65f-4836-8ac5-c9d602296e8c\" (UID: \"f830ac86-e65f-4836-8ac5-c9d602296e8c\") " Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.478768 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgf8t\" (UniqueName: \"kubernetes.io/projected/f830ac86-e65f-4836-8ac5-c9d602296e8c-kube-api-access-tgf8t\") pod \"f830ac86-e65f-4836-8ac5-c9d602296e8c\" (UID: \"f830ac86-e65f-4836-8ac5-c9d602296e8c\") " Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.479213 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f830ac86-e65f-4836-8ac5-c9d602296e8c-utilities" (OuterVolumeSpecName: "utilities") pod "f830ac86-e65f-4836-8ac5-c9d602296e8c" (UID: "f830ac86-e65f-4836-8ac5-c9d602296e8c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.479462 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f830ac86-e65f-4836-8ac5-c9d602296e8c-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.483517 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f830ac86-e65f-4836-8ac5-c9d602296e8c-kube-api-access-tgf8t" (OuterVolumeSpecName: "kube-api-access-tgf8t") pod "f830ac86-e65f-4836-8ac5-c9d602296e8c" (UID: "f830ac86-e65f-4836-8ac5-c9d602296e8c"). InnerVolumeSpecName "kube-api-access-tgf8t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.502618 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f830ac86-e65f-4836-8ac5-c9d602296e8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f830ac86-e65f-4836-8ac5-c9d602296e8c" (UID: "f830ac86-e65f-4836-8ac5-c9d602296e8c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.581144 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgf8t\" (UniqueName: \"kubernetes.io/projected/f830ac86-e65f-4836-8ac5-c9d602296e8c-kube-api-access-tgf8t\") on node \"crc\" DevicePath \"\"" Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.581174 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f830ac86-e65f-4836-8ac5-c9d602296e8c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.642815 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-spn8r"] Feb 19 10:18:32 crc kubenswrapper[4965]: I0219 10:18:32.654124 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-spn8r"] Feb 19 10:18:33 crc kubenswrapper[4965]: I0219 10:18:33.211476 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f830ac86-e65f-4836-8ac5-c9d602296e8c" path="/var/lib/kubelet/pods/f830ac86-e65f-4836-8ac5-c9d602296e8c/volumes" Feb 19 10:18:46 crc kubenswrapper[4965]: I0219 10:18:46.600850 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 19 10:18:46 crc kubenswrapper[4965]: I0219 10:18:46.602725 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:19:16 crc kubenswrapper[4965]: I0219 10:19:16.601182 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:19:16 crc kubenswrapper[4965]: I0219 10:19:16.601721 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:19:46 crc kubenswrapper[4965]: I0219 10:19:46.600994 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:19:46 crc kubenswrapper[4965]: I0219 10:19:46.601589 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:19:46 crc kubenswrapper[4965]: I0219 10:19:46.601638 4965 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" Feb 19 10:19:46 crc kubenswrapper[4965]: I0219 10:19:46.602440 4965 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf"} pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:19:46 crc kubenswrapper[4965]: I0219 10:19:46.602499 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" containerID="cri-o://0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" gracePeriod=600 Feb 19 10:19:46 crc kubenswrapper[4965]: E0219 10:19:46.730953 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:19:46 crc kubenswrapper[4965]: I0219 10:19:46.888243 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jjz7h"] Feb 19 10:19:46 crc kubenswrapper[4965]: E0219 10:19:46.888965 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f830ac86-e65f-4836-8ac5-c9d602296e8c" containerName="registry-server" Feb 19 10:19:46 crc kubenswrapper[4965]: I0219 10:19:46.888989 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="f830ac86-e65f-4836-8ac5-c9d602296e8c" containerName="registry-server" 
Feb 19 10:19:46 crc kubenswrapper[4965]: E0219 10:19:46.889018 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f830ac86-e65f-4836-8ac5-c9d602296e8c" containerName="extract-content" Feb 19 10:19:46 crc kubenswrapper[4965]: I0219 10:19:46.889030 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="f830ac86-e65f-4836-8ac5-c9d602296e8c" containerName="extract-content" Feb 19 10:19:46 crc kubenswrapper[4965]: E0219 10:19:46.889067 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f830ac86-e65f-4836-8ac5-c9d602296e8c" containerName="extract-utilities" Feb 19 10:19:46 crc kubenswrapper[4965]: I0219 10:19:46.889081 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="f830ac86-e65f-4836-8ac5-c9d602296e8c" containerName="extract-utilities" Feb 19 10:19:46 crc kubenswrapper[4965]: I0219 10:19:46.889446 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="f830ac86-e65f-4836-8ac5-c9d602296e8c" containerName="registry-server" Feb 19 10:19:46 crc kubenswrapper[4965]: I0219 10:19:46.894987 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jjz7h" Feb 19 10:19:46 crc kubenswrapper[4965]: I0219 10:19:46.925314 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jjz7h"] Feb 19 10:19:47 crc kubenswrapper[4965]: I0219 10:19:47.039766 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvsss\" (UniqueName: \"kubernetes.io/projected/28ed8d8d-3c38-43ae-b93c-d5c88112a04b-kube-api-access-tvsss\") pod \"certified-operators-jjz7h\" (UID: \"28ed8d8d-3c38-43ae-b93c-d5c88112a04b\") " pod="openshift-marketplace/certified-operators-jjz7h" Feb 19 10:19:47 crc kubenswrapper[4965]: I0219 10:19:47.039978 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28ed8d8d-3c38-43ae-b93c-d5c88112a04b-catalog-content\") pod \"certified-operators-jjz7h\" (UID: \"28ed8d8d-3c38-43ae-b93c-d5c88112a04b\") " pod="openshift-marketplace/certified-operators-jjz7h" Feb 19 10:19:47 crc kubenswrapper[4965]: I0219 10:19:47.040303 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28ed8d8d-3c38-43ae-b93c-d5c88112a04b-utilities\") pod \"certified-operators-jjz7h\" (UID: \"28ed8d8d-3c38-43ae-b93c-d5c88112a04b\") " pod="openshift-marketplace/certified-operators-jjz7h" Feb 19 10:19:47 crc kubenswrapper[4965]: I0219 10:19:47.142323 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28ed8d8d-3c38-43ae-b93c-d5c88112a04b-utilities\") pod \"certified-operators-jjz7h\" (UID: \"28ed8d8d-3c38-43ae-b93c-d5c88112a04b\") " pod="openshift-marketplace/certified-operators-jjz7h" Feb 19 10:19:47 crc kubenswrapper[4965]: I0219 10:19:47.142478 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tvsss\" (UniqueName: \"kubernetes.io/projected/28ed8d8d-3c38-43ae-b93c-d5c88112a04b-kube-api-access-tvsss\") pod \"certified-operators-jjz7h\" (UID: \"28ed8d8d-3c38-43ae-b93c-d5c88112a04b\") " pod="openshift-marketplace/certified-operators-jjz7h" Feb 19 10:19:47 crc kubenswrapper[4965]: I0219 10:19:47.142531 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28ed8d8d-3c38-43ae-b93c-d5c88112a04b-catalog-content\") pod \"certified-operators-jjz7h\" (UID: \"28ed8d8d-3c38-43ae-b93c-d5c88112a04b\") " pod="openshift-marketplace/certified-operators-jjz7h" Feb 19 10:19:47 crc kubenswrapper[4965]: I0219 10:19:47.143030 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28ed8d8d-3c38-43ae-b93c-d5c88112a04b-utilities\") pod \"certified-operators-jjz7h\" (UID: \"28ed8d8d-3c38-43ae-b93c-d5c88112a04b\") " pod="openshift-marketplace/certified-operators-jjz7h" Feb 19 10:19:47 crc kubenswrapper[4965]: I0219 10:19:47.144446 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28ed8d8d-3c38-43ae-b93c-d5c88112a04b-catalog-content\") pod \"certified-operators-jjz7h\" (UID: \"28ed8d8d-3c38-43ae-b93c-d5c88112a04b\") " pod="openshift-marketplace/certified-operators-jjz7h" Feb 19 10:19:47 crc kubenswrapper[4965]: I0219 10:19:47.161934 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvsss\" (UniqueName: \"kubernetes.io/projected/28ed8d8d-3c38-43ae-b93c-d5c88112a04b-kube-api-access-tvsss\") pod \"certified-operators-jjz7h\" (UID: \"28ed8d8d-3c38-43ae-b93c-d5c88112a04b\") " pod="openshift-marketplace/certified-operators-jjz7h" Feb 19 10:19:47 crc kubenswrapper[4965]: I0219 10:19:47.242812 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jjz7h" Feb 19 10:19:47 crc kubenswrapper[4965]: I0219 10:19:47.247588 4965 generic.go:334] "Generic (PLEG): container finished" podID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" exitCode=0 Feb 19 10:19:47 crc kubenswrapper[4965]: I0219 10:19:47.247651 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerDied","Data":"0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf"} Feb 19 10:19:47 crc kubenswrapper[4965]: I0219 10:19:47.247699 4965 scope.go:117] "RemoveContainer" containerID="6e790f4d4e6658655db9f91927db114ee9b37405e8ae4a7d350746d0c209e2f2" Feb 19 10:19:47 crc kubenswrapper[4965]: I0219 10:19:47.248557 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" Feb 19 10:19:47 crc kubenswrapper[4965]: E0219 10:19:47.250137 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:19:47 crc kubenswrapper[4965]: I0219 10:19:47.780468 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jjz7h"] Feb 19 10:19:48 crc kubenswrapper[4965]: I0219 10:19:48.268131 4965 generic.go:334] "Generic (PLEG): container finished" podID="28ed8d8d-3c38-43ae-b93c-d5c88112a04b" containerID="e09c78676090a1835d5476ec01f5ae4ecb9afa9e881827a3a193dbc79fadd129" exitCode=0 Feb 19 10:19:48 crc kubenswrapper[4965]: I0219 
10:19:48.268269 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjz7h" event={"ID":"28ed8d8d-3c38-43ae-b93c-d5c88112a04b","Type":"ContainerDied","Data":"e09c78676090a1835d5476ec01f5ae4ecb9afa9e881827a3a193dbc79fadd129"} Feb 19 10:19:48 crc kubenswrapper[4965]: I0219 10:19:48.268386 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjz7h" event={"ID":"28ed8d8d-3c38-43ae-b93c-d5c88112a04b","Type":"ContainerStarted","Data":"dc7849f653587f5cf8441defdcdfb518f8e7f488daf5f6c51fd45bb4fb3576d8"} Feb 19 10:19:55 crc kubenswrapper[4965]: I0219 10:19:55.352988 4965 generic.go:334] "Generic (PLEG): container finished" podID="28ed8d8d-3c38-43ae-b93c-d5c88112a04b" containerID="7ed489f4192a30665abc3a3bd7938f3d8410fc70ea78f0b1a2283f5f1b5e3e86" exitCode=0 Feb 19 10:19:55 crc kubenswrapper[4965]: I0219 10:19:55.353112 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjz7h" event={"ID":"28ed8d8d-3c38-43ae-b93c-d5c88112a04b","Type":"ContainerDied","Data":"7ed489f4192a30665abc3a3bd7938f3d8410fc70ea78f0b1a2283f5f1b5e3e86"} Feb 19 10:19:56 crc kubenswrapper[4965]: I0219 10:19:56.362686 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjz7h" event={"ID":"28ed8d8d-3c38-43ae-b93c-d5c88112a04b","Type":"ContainerStarted","Data":"3dd4f36de18962c1837cd30f62f4a9af3c32a68d86536881f7e2714ba571e1e2"} Feb 19 10:19:56 crc kubenswrapper[4965]: I0219 10:19:56.387305 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jjz7h" podStartSLOduration=2.838043266 podStartE2EDuration="10.387274637s" podCreationTimestamp="2026-02-19 10:19:46 +0000 UTC" firstStartedPulling="2026-02-19 10:19:48.271087086 +0000 UTC m=+2243.892408436" lastFinishedPulling="2026-02-19 10:19:55.820318457 +0000 UTC m=+2251.441639807" 
observedRunningTime="2026-02-19 10:19:56.378757481 +0000 UTC m=+2252.000078801" watchObservedRunningTime="2026-02-19 10:19:56.387274637 +0000 UTC m=+2252.008595957" Feb 19 10:19:57 crc kubenswrapper[4965]: I0219 10:19:57.244147 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jjz7h" Feb 19 10:19:57 crc kubenswrapper[4965]: I0219 10:19:57.244527 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jjz7h" Feb 19 10:19:58 crc kubenswrapper[4965]: I0219 10:19:58.311871 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jjz7h" podUID="28ed8d8d-3c38-43ae-b93c-d5c88112a04b" containerName="registry-server" probeResult="failure" output=< Feb 19 10:19:58 crc kubenswrapper[4965]: timeout: failed to connect service ":50051" within 1s Feb 19 10:19:58 crc kubenswrapper[4965]: > Feb 19 10:20:00 crc kubenswrapper[4965]: I0219 10:20:00.199001 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" Feb 19 10:20:00 crc kubenswrapper[4965]: E0219 10:20:00.199866 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:20:07 crc kubenswrapper[4965]: I0219 10:20:07.299083 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jjz7h" Feb 19 10:20:07 crc kubenswrapper[4965]: I0219 10:20:07.351047 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-jjz7h" Feb 19 10:20:07 crc kubenswrapper[4965]: I0219 10:20:07.453815 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jjz7h"] Feb 19 10:20:07 crc kubenswrapper[4965]: I0219 10:20:07.544871 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7ct5s"] Feb 19 10:20:07 crc kubenswrapper[4965]: I0219 10:20:07.545235 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7ct5s" podUID="5aa07bb4-7540-437b-9720-9cf4b8b3af65" containerName="registry-server" containerID="cri-o://c1c1898ca2f01380bc70bb3fec828bea453bd40c05302d69590fe041db5f128a" gracePeriod=2 Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.149508 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7ct5s" Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.352992 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa07bb4-7540-437b-9720-9cf4b8b3af65-catalog-content\") pod \"5aa07bb4-7540-437b-9720-9cf4b8b3af65\" (UID: \"5aa07bb4-7540-437b-9720-9cf4b8b3af65\") " Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.353250 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m2p2\" (UniqueName: \"kubernetes.io/projected/5aa07bb4-7540-437b-9720-9cf4b8b3af65-kube-api-access-8m2p2\") pod \"5aa07bb4-7540-437b-9720-9cf4b8b3af65\" (UID: \"5aa07bb4-7540-437b-9720-9cf4b8b3af65\") " Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.353308 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa07bb4-7540-437b-9720-9cf4b8b3af65-utilities\") pod \"5aa07bb4-7540-437b-9720-9cf4b8b3af65\" (UID: 
\"5aa07bb4-7540-437b-9720-9cf4b8b3af65\") " Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.357256 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5aa07bb4-7540-437b-9720-9cf4b8b3af65-utilities" (OuterVolumeSpecName: "utilities") pod "5aa07bb4-7540-437b-9720-9cf4b8b3af65" (UID: "5aa07bb4-7540-437b-9720-9cf4b8b3af65"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.360747 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aa07bb4-7540-437b-9720-9cf4b8b3af65-kube-api-access-8m2p2" (OuterVolumeSpecName: "kube-api-access-8m2p2") pod "5aa07bb4-7540-437b-9720-9cf4b8b3af65" (UID: "5aa07bb4-7540-437b-9720-9cf4b8b3af65"). InnerVolumeSpecName "kube-api-access-8m2p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.455820 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8m2p2\" (UniqueName: \"kubernetes.io/projected/5aa07bb4-7540-437b-9720-9cf4b8b3af65-kube-api-access-8m2p2\") on node \"crc\" DevicePath \"\"" Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.455852 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa07bb4-7540-437b-9720-9cf4b8b3af65-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.509446 4965 generic.go:334] "Generic (PLEG): container finished" podID="5aa07bb4-7540-437b-9720-9cf4b8b3af65" containerID="c1c1898ca2f01380bc70bb3fec828bea453bd40c05302d69590fe041db5f128a" exitCode=0 Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.509507 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7ct5s" Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.509534 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ct5s" event={"ID":"5aa07bb4-7540-437b-9720-9cf4b8b3af65","Type":"ContainerDied","Data":"c1c1898ca2f01380bc70bb3fec828bea453bd40c05302d69590fe041db5f128a"} Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.510502 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ct5s" event={"ID":"5aa07bb4-7540-437b-9720-9cf4b8b3af65","Type":"ContainerDied","Data":"e0d390437982ec942bc6d801f8f73c6a3d391575bcac381b0ab7e3e9e58bd17f"} Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.510525 4965 scope.go:117] "RemoveContainer" containerID="c1c1898ca2f01380bc70bb3fec828bea453bd40c05302d69590fe041db5f128a" Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.536303 4965 scope.go:117] "RemoveContainer" containerID="ea2f4307e1dbe35d6af39cb06bbf396afd323100da9ecf6d3c82628174fcbc82" Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.553163 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5aa07bb4-7540-437b-9720-9cf4b8b3af65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5aa07bb4-7540-437b-9720-9cf4b8b3af65" (UID: "5aa07bb4-7540-437b-9720-9cf4b8b3af65"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.558347 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa07bb4-7540-437b-9720-9cf4b8b3af65-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.585844 4965 scope.go:117] "RemoveContainer" containerID="79b5df30d728f68bc78d0bbfd80895e14ea436d46b2086306f6322a82e7eb34e" Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.640752 4965 scope.go:117] "RemoveContainer" containerID="c1c1898ca2f01380bc70bb3fec828bea453bd40c05302d69590fe041db5f128a" Feb 19 10:20:08 crc kubenswrapper[4965]: E0219 10:20:08.641130 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1c1898ca2f01380bc70bb3fec828bea453bd40c05302d69590fe041db5f128a\": container with ID starting with c1c1898ca2f01380bc70bb3fec828bea453bd40c05302d69590fe041db5f128a not found: ID does not exist" containerID="c1c1898ca2f01380bc70bb3fec828bea453bd40c05302d69590fe041db5f128a" Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.641159 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1c1898ca2f01380bc70bb3fec828bea453bd40c05302d69590fe041db5f128a"} err="failed to get container status \"c1c1898ca2f01380bc70bb3fec828bea453bd40c05302d69590fe041db5f128a\": rpc error: code = NotFound desc = could not find container \"c1c1898ca2f01380bc70bb3fec828bea453bd40c05302d69590fe041db5f128a\": container with ID starting with c1c1898ca2f01380bc70bb3fec828bea453bd40c05302d69590fe041db5f128a not found: ID does not exist" Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.641179 4965 scope.go:117] "RemoveContainer" containerID="ea2f4307e1dbe35d6af39cb06bbf396afd323100da9ecf6d3c82628174fcbc82" Feb 19 10:20:08 crc kubenswrapper[4965]: E0219 10:20:08.641495 4965 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea2f4307e1dbe35d6af39cb06bbf396afd323100da9ecf6d3c82628174fcbc82\": container with ID starting with ea2f4307e1dbe35d6af39cb06bbf396afd323100da9ecf6d3c82628174fcbc82 not found: ID does not exist" containerID="ea2f4307e1dbe35d6af39cb06bbf396afd323100da9ecf6d3c82628174fcbc82" Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.641535 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea2f4307e1dbe35d6af39cb06bbf396afd323100da9ecf6d3c82628174fcbc82"} err="failed to get container status \"ea2f4307e1dbe35d6af39cb06bbf396afd323100da9ecf6d3c82628174fcbc82\": rpc error: code = NotFound desc = could not find container \"ea2f4307e1dbe35d6af39cb06bbf396afd323100da9ecf6d3c82628174fcbc82\": container with ID starting with ea2f4307e1dbe35d6af39cb06bbf396afd323100da9ecf6d3c82628174fcbc82 not found: ID does not exist" Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.641562 4965 scope.go:117] "RemoveContainer" containerID="79b5df30d728f68bc78d0bbfd80895e14ea436d46b2086306f6322a82e7eb34e" Feb 19 10:20:08 crc kubenswrapper[4965]: E0219 10:20:08.641839 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79b5df30d728f68bc78d0bbfd80895e14ea436d46b2086306f6322a82e7eb34e\": container with ID starting with 79b5df30d728f68bc78d0bbfd80895e14ea436d46b2086306f6322a82e7eb34e not found: ID does not exist" containerID="79b5df30d728f68bc78d0bbfd80895e14ea436d46b2086306f6322a82e7eb34e" Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.641890 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b5df30d728f68bc78d0bbfd80895e14ea436d46b2086306f6322a82e7eb34e"} err="failed to get container status \"79b5df30d728f68bc78d0bbfd80895e14ea436d46b2086306f6322a82e7eb34e\": rpc error: code = NotFound desc = could 
not find container \"79b5df30d728f68bc78d0bbfd80895e14ea436d46b2086306f6322a82e7eb34e\": container with ID starting with 79b5df30d728f68bc78d0bbfd80895e14ea436d46b2086306f6322a82e7eb34e not found: ID does not exist" Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.846689 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7ct5s"] Feb 19 10:20:08 crc kubenswrapper[4965]: I0219 10:20:08.857163 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7ct5s"] Feb 19 10:20:09 crc kubenswrapper[4965]: I0219 10:20:09.212356 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5aa07bb4-7540-437b-9720-9cf4b8b3af65" path="/var/lib/kubelet/pods/5aa07bb4-7540-437b-9720-9cf4b8b3af65/volumes" Feb 19 10:20:13 crc kubenswrapper[4965]: I0219 10:20:13.201064 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" Feb 19 10:20:13 crc kubenswrapper[4965]: E0219 10:20:13.201946 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:20:27 crc kubenswrapper[4965]: I0219 10:20:27.198391 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" Feb 19 10:20:27 crc kubenswrapper[4965]: E0219 10:20:27.199235 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:20:39 crc kubenswrapper[4965]: I0219 10:20:39.198387 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" Feb 19 10:20:39 crc kubenswrapper[4965]: E0219 10:20:39.199421 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:20:52 crc kubenswrapper[4965]: I0219 10:20:52.201005 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" Feb 19 10:20:52 crc kubenswrapper[4965]: E0219 10:20:52.201958 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:21:07 crc kubenswrapper[4965]: I0219 10:21:07.198544 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" Feb 19 10:21:07 crc kubenswrapper[4965]: E0219 10:21:07.199396 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:21:18 crc kubenswrapper[4965]: I0219 10:21:18.197929 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" Feb 19 10:21:18 crc kubenswrapper[4965]: E0219 10:21:18.198610 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:21:33 crc kubenswrapper[4965]: I0219 10:21:33.199525 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" Feb 19 10:21:33 crc kubenswrapper[4965]: E0219 10:21:33.200509 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:21:47 crc kubenswrapper[4965]: I0219 10:21:47.198126 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" Feb 19 10:21:47 crc kubenswrapper[4965]: E0219 10:21:47.198869 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:21:59 crc kubenswrapper[4965]: I0219 10:21:59.198450 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" Feb 19 10:21:59 crc kubenswrapper[4965]: E0219 10:21:59.199721 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:22:04 crc kubenswrapper[4965]: I0219 10:22:04.729553 4965 generic.go:334] "Generic (PLEG): container finished" podID="6b29dda1-69ac-4d2a-a078-e2f1a7103b67" containerID="4761880427f64304d96a6530071ac20474f2f427a137001ccdc00e5d99cba9b3" exitCode=0 Feb 19 10:22:04 crc kubenswrapper[4965]: I0219 10:22:04.729635 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" event={"ID":"6b29dda1-69ac-4d2a-a078-e2f1a7103b67","Type":"ContainerDied","Data":"4761880427f64304d96a6530071ac20474f2f427a137001ccdc00e5d99cba9b3"} Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.316125 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.409916 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-libvirt-combined-ca-bundle\") pod \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\" (UID: \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\") " Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.410137 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-inventory\") pod \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\" (UID: \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\") " Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.410176 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c95v2\" (UniqueName: \"kubernetes.io/projected/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-kube-api-access-c95v2\") pod \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\" (UID: \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\") " Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.410319 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-ssh-key-openstack-edpm-ipam\") pod \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\" (UID: \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\") " Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.410371 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-libvirt-secret-0\") pod \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\" (UID: \"6b29dda1-69ac-4d2a-a078-e2f1a7103b67\") " Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.417012 4965 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-kube-api-access-c95v2" (OuterVolumeSpecName: "kube-api-access-c95v2") pod "6b29dda1-69ac-4d2a-a078-e2f1a7103b67" (UID: "6b29dda1-69ac-4d2a-a078-e2f1a7103b67"). InnerVolumeSpecName "kube-api-access-c95v2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.423937 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "6b29dda1-69ac-4d2a-a078-e2f1a7103b67" (UID: "6b29dda1-69ac-4d2a-a078-e2f1a7103b67"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.445178 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6b29dda1-69ac-4d2a-a078-e2f1a7103b67" (UID: "6b29dda1-69ac-4d2a-a078-e2f1a7103b67"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.445288 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "6b29dda1-69ac-4d2a-a078-e2f1a7103b67" (UID: "6b29dda1-69ac-4d2a-a078-e2f1a7103b67"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.445439 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-inventory" (OuterVolumeSpecName: "inventory") pod "6b29dda1-69ac-4d2a-a078-e2f1a7103b67" (UID: "6b29dda1-69ac-4d2a-a078-e2f1a7103b67"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.512962 4965 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.513063 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c95v2\" (UniqueName: \"kubernetes.io/projected/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-kube-api-access-c95v2\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.513108 4965 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.513125 4965 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.513138 4965 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b29dda1-69ac-4d2a-a078-e2f1a7103b67-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.748407 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" event={"ID":"6b29dda1-69ac-4d2a-a078-e2f1a7103b67","Type":"ContainerDied","Data":"aa476b86b0ebc2736b42a78dca2da46df4a8b9e0d6b69b9dcc2b504046706a45"} Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.748520 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa476b86b0ebc2736b42a78dca2da46df4a8b9e0d6b69b9dcc2b504046706a45" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.748466 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.863059 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9"] Feb 19 10:22:06 crc kubenswrapper[4965]: E0219 10:22:06.863595 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b29dda1-69ac-4d2a-a078-e2f1a7103b67" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.863625 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b29dda1-69ac-4d2a-a078-e2f1a7103b67" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 10:22:06 crc kubenswrapper[4965]: E0219 10:22:06.863637 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa07bb4-7540-437b-9720-9cf4b8b3af65" containerName="extract-content" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.863645 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa07bb4-7540-437b-9720-9cf4b8b3af65" containerName="extract-content" Feb 19 10:22:06 crc kubenswrapper[4965]: E0219 10:22:06.863674 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa07bb4-7540-437b-9720-9cf4b8b3af65" containerName="registry-server" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.863682 4965 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5aa07bb4-7540-437b-9720-9cf4b8b3af65" containerName="registry-server" Feb 19 10:22:06 crc kubenswrapper[4965]: E0219 10:22:06.863692 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa07bb4-7540-437b-9720-9cf4b8b3af65" containerName="extract-utilities" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.863700 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa07bb4-7540-437b-9720-9cf4b8b3af65" containerName="extract-utilities" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.863983 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aa07bb4-7540-437b-9720-9cf4b8b3af65" containerName="registry-server" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.864005 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b29dda1-69ac-4d2a-a078-e2f1a7103b67" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.865005 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.867290 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.867491 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.867818 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.868013 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.868167 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cthw6" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.873885 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.873968 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.888135 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9"] Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.922254 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:06 crc kubenswrapper[4965]: 
I0219 10:22:06.922328 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.922405 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.922623 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.922989 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqtgk\" (UniqueName: \"kubernetes.io/projected/4bb72c3c-878c-497d-8105-767df1971b0d-kube-api-access-zqtgk\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.923042 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" 
(UniqueName: \"kubernetes.io/configmap/4bb72c3c-878c-497d-8105-767df1971b0d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.923130 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.923186 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.923242 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.923281 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-2\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:06 crc kubenswrapper[4965]: I0219 10:22:06.923380 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.025793 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.025856 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.025878 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.025905 
4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.025940 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqtgk\" (UniqueName: \"kubernetes.io/projected/4bb72c3c-878c-497d-8105-767df1971b0d-kube-api-access-zqtgk\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.025963 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4bb72c3c-878c-497d-8105-767df1971b0d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.025997 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.026038 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-3\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.026066 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.026101 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.026128 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.027579 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4bb72c3c-878c-497d-8105-767df1971b0d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.038039 4965 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.039759 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.044107 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.044983 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.046498 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: 
\"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.046932 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.047062 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.048297 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.048528 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqtgk\" (UniqueName: \"kubernetes.io/projected/4bb72c3c-878c-497d-8105-767df1971b0d-kube-api-access-zqtgk\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.048961 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-x4cs9\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.186942 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.782132 4965 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:22:07 crc kubenswrapper[4965]: I0219 10:22:07.787872 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9"] Feb 19 10:22:08 crc kubenswrapper[4965]: I0219 10:22:08.767460 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" event={"ID":"4bb72c3c-878c-497d-8105-767df1971b0d","Type":"ContainerStarted","Data":"4953251c8ac5f75f43dcc9edf3d34c0a09c3f710012acad9bd79e898b01d33de"} Feb 19 10:22:08 crc kubenswrapper[4965]: I0219 10:22:08.768295 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" event={"ID":"4bb72c3c-878c-497d-8105-767df1971b0d","Type":"ContainerStarted","Data":"03c42a09fdb3d8777ba0084ff3d2959006abb5c0ffbe9712425bdeef10fecd28"} Feb 19 10:22:08 crc kubenswrapper[4965]: I0219 10:22:08.790393 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" podStartSLOduration=2.257257368 podStartE2EDuration="2.790370728s" podCreationTimestamp="2026-02-19 10:22:06 +0000 UTC" firstStartedPulling="2026-02-19 10:22:07.781867866 +0000 UTC m=+2383.403189176" lastFinishedPulling="2026-02-19 10:22:08.314981236 +0000 UTC m=+2383.936302536" observedRunningTime="2026-02-19 10:22:08.787460638 
+0000 UTC m=+2384.408781948" watchObservedRunningTime="2026-02-19 10:22:08.790370728 +0000 UTC m=+2384.411692048" Feb 19 10:22:13 crc kubenswrapper[4965]: I0219 10:22:13.198350 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" Feb 19 10:22:13 crc kubenswrapper[4965]: E0219 10:22:13.199121 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:22:28 crc kubenswrapper[4965]: I0219 10:22:28.198760 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" Feb 19 10:22:28 crc kubenswrapper[4965]: E0219 10:22:28.199377 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:22:42 crc kubenswrapper[4965]: I0219 10:22:42.198606 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" Feb 19 10:22:42 crc kubenswrapper[4965]: E0219 10:22:42.199477 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:22:53 crc kubenswrapper[4965]: I0219 10:22:53.198921 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" Feb 19 10:22:53 crc kubenswrapper[4965]: E0219 10:22:53.200249 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:23:08 crc kubenswrapper[4965]: I0219 10:23:08.198691 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" Feb 19 10:23:08 crc kubenswrapper[4965]: E0219 10:23:08.199780 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:23:20 crc kubenswrapper[4965]: I0219 10:23:20.198775 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" Feb 19 10:23:20 crc kubenswrapper[4965]: E0219 10:23:20.199636 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:23:31 crc kubenswrapper[4965]: I0219 10:23:31.198822 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" Feb 19 10:23:31 crc kubenswrapper[4965]: E0219 10:23:31.199651 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:23:45 crc kubenswrapper[4965]: I0219 10:23:45.207681 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" Feb 19 10:23:45 crc kubenswrapper[4965]: E0219 10:23:45.209979 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:23:57 crc kubenswrapper[4965]: I0219 10:23:57.198165 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" Feb 19 10:23:57 crc kubenswrapper[4965]: E0219 10:23:57.199248 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:24:09 crc kubenswrapper[4965]: I0219 10:24:09.198662 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" Feb 19 10:24:09 crc kubenswrapper[4965]: E0219 10:24:09.199667 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:24:24 crc kubenswrapper[4965]: I0219 10:24:24.197968 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" Feb 19 10:24:24 crc kubenswrapper[4965]: E0219 10:24:24.198729 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:24:28 crc kubenswrapper[4965]: I0219 10:24:28.228317 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" event={"ID":"4bb72c3c-878c-497d-8105-767df1971b0d","Type":"ContainerDied","Data":"4953251c8ac5f75f43dcc9edf3d34c0a09c3f710012acad9bd79e898b01d33de"} Feb 19 10:24:28 crc kubenswrapper[4965]: I0219 
10:24:28.228316 4965 generic.go:334] "Generic (PLEG): container finished" podID="4bb72c3c-878c-497d-8105-767df1971b0d" containerID="4953251c8ac5f75f43dcc9edf3d34c0a09c3f710012acad9bd79e898b01d33de" exitCode=0
Feb 19 10:24:29 crc kubenswrapper[4965]: I0219 10:24:29.834074 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9"
Feb 19 10:24:29 crc kubenswrapper[4965]: I0219 10:24:29.936532 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-1\") pod \"4bb72c3c-878c-497d-8105-767df1971b0d\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") "
Feb 19 10:24:29 crc kubenswrapper[4965]: I0219 10:24:29.936852 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-migration-ssh-key-0\") pod \"4bb72c3c-878c-497d-8105-767df1971b0d\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") "
Feb 19 10:24:29 crc kubenswrapper[4965]: I0219 10:24:29.936891 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-combined-ca-bundle\") pod \"4bb72c3c-878c-497d-8105-767df1971b0d\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") "
Feb 19 10:24:29 crc kubenswrapper[4965]: I0219 10:24:29.936945 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-inventory\") pod \"4bb72c3c-878c-497d-8105-767df1971b0d\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") "
Feb 19 10:24:29 crc kubenswrapper[4965]: I0219 10:24:29.937022 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqtgk\" (UniqueName: \"kubernetes.io/projected/4bb72c3c-878c-497d-8105-767df1971b0d-kube-api-access-zqtgk\") pod \"4bb72c3c-878c-497d-8105-767df1971b0d\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") "
Feb 19 10:24:29 crc kubenswrapper[4965]: I0219 10:24:29.937107 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-2\") pod \"4bb72c3c-878c-497d-8105-767df1971b0d\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") "
Feb 19 10:24:29 crc kubenswrapper[4965]: I0219 10:24:29.937265 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4bb72c3c-878c-497d-8105-767df1971b0d-nova-extra-config-0\") pod \"4bb72c3c-878c-497d-8105-767df1971b0d\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") "
Feb 19 10:24:29 crc kubenswrapper[4965]: I0219 10:24:29.938393 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-3\") pod \"4bb72c3c-878c-497d-8105-767df1971b0d\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") "
Feb 19 10:24:29 crc kubenswrapper[4965]: I0219 10:24:29.938531 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-0\") pod \"4bb72c3c-878c-497d-8105-767df1971b0d\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") "
Feb 19 10:24:29 crc kubenswrapper[4965]: I0219 10:24:29.938585 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-ssh-key-openstack-edpm-ipam\") pod \"4bb72c3c-878c-497d-8105-767df1971b0d\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") "
Feb 19 10:24:29 crc kubenswrapper[4965]: I0219 10:24:29.938648 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-migration-ssh-key-1\") pod \"4bb72c3c-878c-497d-8105-767df1971b0d\" (UID: \"4bb72c3c-878c-497d-8105-767df1971b0d\") "
Feb 19 10:24:29 crc kubenswrapper[4965]: I0219 10:24:29.944590 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "4bb72c3c-878c-497d-8105-767df1971b0d" (UID: "4bb72c3c-878c-497d-8105-767df1971b0d"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:24:29 crc kubenswrapper[4965]: I0219 10:24:29.945661 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb72c3c-878c-497d-8105-767df1971b0d-kube-api-access-zqtgk" (OuterVolumeSpecName: "kube-api-access-zqtgk") pod "4bb72c3c-878c-497d-8105-767df1971b0d" (UID: "4bb72c3c-878c-497d-8105-767df1971b0d"). InnerVolumeSpecName "kube-api-access-zqtgk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:24:29 crc kubenswrapper[4965]: I0219 10:24:29.972987 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4bb72c3c-878c-497d-8105-767df1971b0d" (UID: "4bb72c3c-878c-497d-8105-767df1971b0d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:24:29 crc kubenswrapper[4965]: I0219 10:24:29.975047 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-inventory" (OuterVolumeSpecName: "inventory") pod "4bb72c3c-878c-497d-8105-767df1971b0d" (UID: "4bb72c3c-878c-497d-8105-767df1971b0d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:24:29 crc kubenswrapper[4965]: I0219 10:24:29.975700 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "4bb72c3c-878c-497d-8105-767df1971b0d" (UID: "4bb72c3c-878c-497d-8105-767df1971b0d"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:24:29 crc kubenswrapper[4965]: I0219 10:24:29.976016 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "4bb72c3c-878c-497d-8105-767df1971b0d" (UID: "4bb72c3c-878c-497d-8105-767df1971b0d"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:24:29 crc kubenswrapper[4965]: I0219 10:24:29.980527 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "4bb72c3c-878c-497d-8105-767df1971b0d" (UID: "4bb72c3c-878c-497d-8105-767df1971b0d"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:24:29 crc kubenswrapper[4965]: I0219 10:24:29.981516 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "4bb72c3c-878c-497d-8105-767df1971b0d" (UID: "4bb72c3c-878c-497d-8105-767df1971b0d"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:24:29 crc kubenswrapper[4965]: I0219 10:24:29.983546 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb72c3c-878c-497d-8105-767df1971b0d-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "4bb72c3c-878c-497d-8105-767df1971b0d" (UID: "4bb72c3c-878c-497d-8105-767df1971b0d"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:24:29 crc kubenswrapper[4965]: I0219 10:24:29.984456 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "4bb72c3c-878c-497d-8105-767df1971b0d" (UID: "4bb72c3c-878c-497d-8105-767df1971b0d"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:24:29 crc kubenswrapper[4965]: I0219 10:24:29.985549 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "4bb72c3c-878c-497d-8105-767df1971b0d" (UID: "4bb72c3c-878c-497d-8105-767df1971b0d"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.042136 4965 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.042174 4965 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.042184 4965 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.042214 4965 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-inventory\") on node \"crc\" DevicePath \"\""
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.042225 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqtgk\" (UniqueName: \"kubernetes.io/projected/4bb72c3c-878c-497d-8105-767df1971b0d-kube-api-access-zqtgk\") on node \"crc\" DevicePath \"\""
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.042235 4965 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\""
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.042246 4965 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4bb72c3c-878c-497d-8105-767df1971b0d-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.042255 4965 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\""
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.042268 4965 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.042278 4965 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.042288 4965 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4bb72c3c-878c-497d-8105-767df1971b0d-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.248907 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9" event={"ID":"4bb72c3c-878c-497d-8105-767df1971b0d","Type":"ContainerDied","Data":"03c42a09fdb3d8777ba0084ff3d2959006abb5c0ffbe9712425bdeef10fecd28"}
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.248990 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03c42a09fdb3d8777ba0084ff3d2959006abb5c0ffbe9712425bdeef10fecd28"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.248993 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-x4cs9"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.371195 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"]
Feb 19 10:24:30 crc kubenswrapper[4965]: E0219 10:24:30.372120 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb72c3c-878c-497d-8105-767df1971b0d" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.372144 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb72c3c-878c-497d-8105-767df1971b0d" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.372608 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bb72c3c-878c-497d-8105-767df1971b0d" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.373601 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.376546 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.376854 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.377516 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.378108 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cthw6"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.384479 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.384911 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"]
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.553839 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl7b8\" (UniqueName: \"kubernetes.io/projected/b3d2f922-3941-4ff3-92fc-6bb14cd46698-kube-api-access-sl7b8\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.553914 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.553946 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.553973 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.553991 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.554023 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.554469 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.656467 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl7b8\" (UniqueName: \"kubernetes.io/projected/b3d2f922-3941-4ff3-92fc-6bb14cd46698-kube-api-access-sl7b8\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.656782 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.656811 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.656839 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.656862 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.656910 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.657037 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.662165 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.662241 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.662449 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.662581 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.662588 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.664496 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.678032 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl7b8\" (UniqueName: \"kubernetes.io/projected/b3d2f922-3941-4ff3-92fc-6bb14cd46698-kube-api-access-sl7b8\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"
Feb 19 10:24:30 crc kubenswrapper[4965]: I0219 10:24:30.696278 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"
Feb 19 10:24:31 crc kubenswrapper[4965]: I0219 10:24:31.294513 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2"]
Feb 19 10:24:32 crc kubenswrapper[4965]: I0219 10:24:32.277881 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2" event={"ID":"b3d2f922-3941-4ff3-92fc-6bb14cd46698","Type":"ContainerStarted","Data":"aa924cd65cf28af1d33b163268570233e7518117bb4cfbe75b3bc5dc5d187226"}
Feb 19 10:24:33 crc kubenswrapper[4965]: I0219 10:24:33.287112 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2" event={"ID":"b3d2f922-3941-4ff3-92fc-6bb14cd46698","Type":"ContainerStarted","Data":"f7d44865b4ec1d8bc1fa5ef1878df0d643604e949e31b6cfc810961286f83d21"}
Feb 19 10:24:33 crc kubenswrapper[4965]: I0219 10:24:33.311840 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2" podStartSLOduration=2.370111343 podStartE2EDuration="3.311817189s" podCreationTimestamp="2026-02-19 10:24:30 +0000 UTC" firstStartedPulling="2026-02-19 10:24:31.298572321 +0000 UTC m=+2526.919893631" lastFinishedPulling="2026-02-19 10:24:32.240278146 +0000 UTC m=+2527.861599477" observedRunningTime="2026-02-19 10:24:33.31140632 +0000 UTC m=+2528.932727640" watchObservedRunningTime="2026-02-19 10:24:33.311817189 +0000 UTC m=+2528.933138509"
Feb 19 10:24:38 crc kubenswrapper[4965]: I0219 10:24:38.198254 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf"
Feb 19 10:24:38 crc kubenswrapper[4965]: E0219 10:24:38.199225 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83"
Feb 19 10:24:42 crc kubenswrapper[4965]: I0219 10:24:42.736230 4965 scope.go:117] "RemoveContainer" containerID="d8ec1da7b28e4959f004c9c7f4fbaa07ab53b11bb2f5bc9123e6a5aeec882b49"
Feb 19 10:24:42 crc kubenswrapper[4965]: I0219 10:24:42.780453 4965 scope.go:117] "RemoveContainer" containerID="0851b1f10f4f21bf901efa2698631699fb0bb107a71a0ebd29f7f8bb64ce6714"
Feb 19 10:24:42 crc kubenswrapper[4965]: I0219 10:24:42.829282 4965 scope.go:117] "RemoveContainer" containerID="07678727b9e30790edc1a0dfa5200f70e29e6ee192bbb5479d80e2701deab95c"
Feb 19 10:24:51 crc kubenswrapper[4965]: I0219 10:24:51.199057 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf"
Feb 19 10:24:51 crc kubenswrapper[4965]: I0219 10:24:51.723344 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerStarted","Data":"d46dcae3f2586439cac7ca3262f83dacf19b8435d7db00d5455101ba06915208"}
Feb 19 10:25:25 crc kubenswrapper[4965]: I0219 10:25:25.691737 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5wvxk"]
Feb 19 10:25:25 crc kubenswrapper[4965]: I0219 10:25:25.694321 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5wvxk"
Feb 19 10:25:25 crc kubenswrapper[4965]: I0219 10:25:25.717549 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5wvxk"]
Feb 19 10:25:25 crc kubenswrapper[4965]: I0219 10:25:25.804990 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkn6b\" (UniqueName: \"kubernetes.io/projected/5627a26d-39c2-46ea-8a47-47037d30662c-kube-api-access-rkn6b\") pod \"redhat-operators-5wvxk\" (UID: \"5627a26d-39c2-46ea-8a47-47037d30662c\") " pod="openshift-marketplace/redhat-operators-5wvxk"
Feb 19 10:25:25 crc kubenswrapper[4965]: I0219 10:25:25.805064 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5627a26d-39c2-46ea-8a47-47037d30662c-catalog-content\") pod \"redhat-operators-5wvxk\" (UID: \"5627a26d-39c2-46ea-8a47-47037d30662c\") " pod="openshift-marketplace/redhat-operators-5wvxk"
Feb 19 10:25:25 crc kubenswrapper[4965]: I0219 10:25:25.805125 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5627a26d-39c2-46ea-8a47-47037d30662c-utilities\") pod \"redhat-operators-5wvxk\" (UID: \"5627a26d-39c2-46ea-8a47-47037d30662c\") " pod="openshift-marketplace/redhat-operators-5wvxk"
Feb 19 10:25:25 crc kubenswrapper[4965]: I0219 10:25:25.907414 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkn6b\" (UniqueName: \"kubernetes.io/projected/5627a26d-39c2-46ea-8a47-47037d30662c-kube-api-access-rkn6b\") pod \"redhat-operators-5wvxk\" (UID: \"5627a26d-39c2-46ea-8a47-47037d30662c\") " pod="openshift-marketplace/redhat-operators-5wvxk"
Feb 19 10:25:25 crc kubenswrapper[4965]: I0219 10:25:25.907740 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5627a26d-39c2-46ea-8a47-47037d30662c-catalog-content\") pod \"redhat-operators-5wvxk\" (UID: \"5627a26d-39c2-46ea-8a47-47037d30662c\") " pod="openshift-marketplace/redhat-operators-5wvxk"
Feb 19 10:25:25 crc kubenswrapper[4965]: I0219 10:25:25.907949 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5627a26d-39c2-46ea-8a47-47037d30662c-utilities\") pod \"redhat-operators-5wvxk\" (UID: \"5627a26d-39c2-46ea-8a47-47037d30662c\") " pod="openshift-marketplace/redhat-operators-5wvxk"
Feb 19 10:25:25 crc kubenswrapper[4965]: I0219 10:25:25.908455 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5627a26d-39c2-46ea-8a47-47037d30662c-utilities\") pod \"redhat-operators-5wvxk\" (UID: \"5627a26d-39c2-46ea-8a47-47037d30662c\") " pod="openshift-marketplace/redhat-operators-5wvxk"
Feb 19 10:25:25 crc kubenswrapper[4965]: I0219 10:25:25.909657 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5627a26d-39c2-46ea-8a47-47037d30662c-catalog-content\") pod \"redhat-operators-5wvxk\" (UID: \"5627a26d-39c2-46ea-8a47-47037d30662c\") " pod="openshift-marketplace/redhat-operators-5wvxk"
Feb 19 10:25:25 crc kubenswrapper[4965]: I0219 10:25:25.930365 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkn6b\" (UniqueName: \"kubernetes.io/projected/5627a26d-39c2-46ea-8a47-47037d30662c-kube-api-access-rkn6b\") pod \"redhat-operators-5wvxk\" (UID: \"5627a26d-39c2-46ea-8a47-47037d30662c\") " pod="openshift-marketplace/redhat-operators-5wvxk"
Feb 19 10:25:26 crc kubenswrapper[4965]: I0219 10:25:26.019536 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5wvxk"
Feb 19 10:25:26 crc kubenswrapper[4965]: I0219 10:25:26.523072 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5wvxk"]
Feb 19 10:25:27 crc kubenswrapper[4965]: I0219 10:25:27.125406 4965 generic.go:334] "Generic (PLEG): container finished" podID="5627a26d-39c2-46ea-8a47-47037d30662c" containerID="554c2749ba48407b5501d05f647d066b2ca2a523453f187acdd786e0d0ff75ed" exitCode=0
Feb 19 10:25:27 crc kubenswrapper[4965]: I0219 10:25:27.125461 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wvxk" event={"ID":"5627a26d-39c2-46ea-8a47-47037d30662c","Type":"ContainerDied","Data":"554c2749ba48407b5501d05f647d066b2ca2a523453f187acdd786e0d0ff75ed"}
Feb 19 10:25:27 crc kubenswrapper[4965]: I0219 10:25:27.125495 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wvxk" event={"ID":"5627a26d-39c2-46ea-8a47-47037d30662c","Type":"ContainerStarted","Data":"3c4fe53937a1d9c2687062c8521139c7a880c61e2e6d3af16d7e915baba2325b"}
Feb 19 10:25:28 crc kubenswrapper[4965]: I0219 10:25:28.136506 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wvxk" event={"ID":"5627a26d-39c2-46ea-8a47-47037d30662c","Type":"ContainerStarted","Data":"40da0ac7ca0dfed3f2165bc3931cb83266454b95f1277fa9c43d33f69d7eb78f"}
Feb 19 10:25:35 crc kubenswrapper[4965]: I0219 10:25:35.211324 4965 generic.go:334] "Generic (PLEG): container finished" podID="5627a26d-39c2-46ea-8a47-47037d30662c" containerID="40da0ac7ca0dfed3f2165bc3931cb83266454b95f1277fa9c43d33f69d7eb78f" exitCode=0
Feb 19 10:25:35 crc kubenswrapper[4965]: I0219 10:25:35.211846 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wvxk" event={"ID":"5627a26d-39c2-46ea-8a47-47037d30662c","Type":"ContainerDied","Data":"40da0ac7ca0dfed3f2165bc3931cb83266454b95f1277fa9c43d33f69d7eb78f"}
Feb 19 10:25:36 crc kubenswrapper[4965]: I0219 10:25:36.223700 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wvxk" event={"ID":"5627a26d-39c2-46ea-8a47-47037d30662c","Type":"ContainerStarted","Data":"9e905986d2170c5853790c41e2e127c1c7ac656b5638e94e077c588f7ff6ec7c"}
Feb 19 10:25:36 crc kubenswrapper[4965]: I0219 10:25:36.246091 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5wvxk" podStartSLOduration=2.630233508 podStartE2EDuration="11.246074408s" podCreationTimestamp="2026-02-19 10:25:25 +0000 UTC" firstStartedPulling="2026-02-19 10:25:27.128490477 +0000 UTC m=+2582.749811787" lastFinishedPulling="2026-02-19 10:25:35.744331377 +0000 UTC m=+2591.365652687" observedRunningTime="2026-02-19 10:25:36.241573399 +0000 UTC m=+2591.862894709" watchObservedRunningTime="2026-02-19 10:25:36.246074408 +0000 UTC m=+2591.867395718"
Feb 19 10:25:46 crc kubenswrapper[4965]: I0219 10:25:46.020041 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5wvxk"
Feb 19 10:25:46 crc kubenswrapper[4965]: I0219 10:25:46.020713 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5wvxk"
Feb 19 10:25:47 crc kubenswrapper[4965]: I0219 10:25:47.067115 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5wvxk" podUID="5627a26d-39c2-46ea-8a47-47037d30662c" containerName="registry-server" probeResult="failure" output=<
Feb 19 10:25:47 crc kubenswrapper[4965]: timeout: failed to connect service ":50051" within 1s
Feb 19 10:25:47 crc kubenswrapper[4965]: >
Feb 19 10:25:57 crc kubenswrapper[4965]: I0219 10:25:57.076061 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5wvxk" podUID="5627a26d-39c2-46ea-8a47-47037d30662c" containerName="registry-server" probeResult="failure" output=<
Feb 19 10:25:57 crc kubenswrapper[4965]: timeout: failed to connect service ":50051" within 1s
Feb 19 10:25:57 crc kubenswrapper[4965]: >
Feb 19 10:26:06 crc kubenswrapper[4965]: I0219 10:26:06.082597 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5wvxk"
Feb 19 10:26:06 crc kubenswrapper[4965]: I0219 10:26:06.152275 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5wvxk"
Feb 19 10:26:06 crc kubenswrapper[4965]: I0219 10:26:06.329905 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5wvxk"]
Feb 19 10:26:07 crc kubenswrapper[4965]: I0219 10:26:07.571500 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5wvxk" podUID="5627a26d-39c2-46ea-8a47-47037d30662c" containerName="registry-server" containerID="cri-o://9e905986d2170c5853790c41e2e127c1c7ac656b5638e94e077c588f7ff6ec7c" gracePeriod=2
Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.152847 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5wvxk"
Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.265611 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkn6b\" (UniqueName: \"kubernetes.io/projected/5627a26d-39c2-46ea-8a47-47037d30662c-kube-api-access-rkn6b\") pod \"5627a26d-39c2-46ea-8a47-47037d30662c\" (UID: \"5627a26d-39c2-46ea-8a47-47037d30662c\") "
Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.265741 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5627a26d-39c2-46ea-8a47-47037d30662c-catalog-content\") pod \"5627a26d-39c2-46ea-8a47-47037d30662c\" (UID: \"5627a26d-39c2-46ea-8a47-47037d30662c\") "
Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.265968 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5627a26d-39c2-46ea-8a47-47037d30662c-utilities\") pod \"5627a26d-39c2-46ea-8a47-47037d30662c\" (UID: \"5627a26d-39c2-46ea-8a47-47037d30662c\") "
Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.267141 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5627a26d-39c2-46ea-8a47-47037d30662c-utilities" (OuterVolumeSpecName: "utilities") pod "5627a26d-39c2-46ea-8a47-47037d30662c" (UID: "5627a26d-39c2-46ea-8a47-47037d30662c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.275434 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5627a26d-39c2-46ea-8a47-47037d30662c-kube-api-access-rkn6b" (OuterVolumeSpecName: "kube-api-access-rkn6b") pod "5627a26d-39c2-46ea-8a47-47037d30662c" (UID: "5627a26d-39c2-46ea-8a47-47037d30662c"). InnerVolumeSpecName "kube-api-access-rkn6b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.369015 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5627a26d-39c2-46ea-8a47-47037d30662c-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.369056 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkn6b\" (UniqueName: \"kubernetes.io/projected/5627a26d-39c2-46ea-8a47-47037d30662c-kube-api-access-rkn6b\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.531156 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5627a26d-39c2-46ea-8a47-47037d30662c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5627a26d-39c2-46ea-8a47-47037d30662c" (UID: "5627a26d-39c2-46ea-8a47-47037d30662c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.573280 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5627a26d-39c2-46ea-8a47-47037d30662c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.582483 4965 generic.go:334] "Generic (PLEG): container finished" podID="5627a26d-39c2-46ea-8a47-47037d30662c" containerID="9e905986d2170c5853790c41e2e127c1c7ac656b5638e94e077c588f7ff6ec7c" exitCode=0 Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.582523 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5wvxk" event={"ID":"5627a26d-39c2-46ea-8a47-47037d30662c","Type":"ContainerDied","Data":"9e905986d2170c5853790c41e2e127c1c7ac656b5638e94e077c588f7ff6ec7c"} Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.582556 4965 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-5wvxk" event={"ID":"5627a26d-39c2-46ea-8a47-47037d30662c","Type":"ContainerDied","Data":"3c4fe53937a1d9c2687062c8521139c7a880c61e2e6d3af16d7e915baba2325b"} Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.582573 4965 scope.go:117] "RemoveContainer" containerID="9e905986d2170c5853790c41e2e127c1c7ac656b5638e94e077c588f7ff6ec7c" Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.583007 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5wvxk" Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.602933 4965 scope.go:117] "RemoveContainer" containerID="40da0ac7ca0dfed3f2165bc3931cb83266454b95f1277fa9c43d33f69d7eb78f" Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.620213 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5wvxk"] Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.633782 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5wvxk"] Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.651946 4965 scope.go:117] "RemoveContainer" containerID="554c2749ba48407b5501d05f647d066b2ca2a523453f187acdd786e0d0ff75ed" Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.684935 4965 scope.go:117] "RemoveContainer" containerID="9e905986d2170c5853790c41e2e127c1c7ac656b5638e94e077c588f7ff6ec7c" Feb 19 10:26:08 crc kubenswrapper[4965]: E0219 10:26:08.685645 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e905986d2170c5853790c41e2e127c1c7ac656b5638e94e077c588f7ff6ec7c\": container with ID starting with 9e905986d2170c5853790c41e2e127c1c7ac656b5638e94e077c588f7ff6ec7c not found: ID does not exist" containerID="9e905986d2170c5853790c41e2e127c1c7ac656b5638e94e077c588f7ff6ec7c" Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.685684 4965 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e905986d2170c5853790c41e2e127c1c7ac656b5638e94e077c588f7ff6ec7c"} err="failed to get container status \"9e905986d2170c5853790c41e2e127c1c7ac656b5638e94e077c588f7ff6ec7c\": rpc error: code = NotFound desc = could not find container \"9e905986d2170c5853790c41e2e127c1c7ac656b5638e94e077c588f7ff6ec7c\": container with ID starting with 9e905986d2170c5853790c41e2e127c1c7ac656b5638e94e077c588f7ff6ec7c not found: ID does not exist" Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.685735 4965 scope.go:117] "RemoveContainer" containerID="40da0ac7ca0dfed3f2165bc3931cb83266454b95f1277fa9c43d33f69d7eb78f" Feb 19 10:26:08 crc kubenswrapper[4965]: E0219 10:26:08.686162 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40da0ac7ca0dfed3f2165bc3931cb83266454b95f1277fa9c43d33f69d7eb78f\": container with ID starting with 40da0ac7ca0dfed3f2165bc3931cb83266454b95f1277fa9c43d33f69d7eb78f not found: ID does not exist" containerID="40da0ac7ca0dfed3f2165bc3931cb83266454b95f1277fa9c43d33f69d7eb78f" Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.686217 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40da0ac7ca0dfed3f2165bc3931cb83266454b95f1277fa9c43d33f69d7eb78f"} err="failed to get container status \"40da0ac7ca0dfed3f2165bc3931cb83266454b95f1277fa9c43d33f69d7eb78f\": rpc error: code = NotFound desc = could not find container \"40da0ac7ca0dfed3f2165bc3931cb83266454b95f1277fa9c43d33f69d7eb78f\": container with ID starting with 40da0ac7ca0dfed3f2165bc3931cb83266454b95f1277fa9c43d33f69d7eb78f not found: ID does not exist" Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.686249 4965 scope.go:117] "RemoveContainer" containerID="554c2749ba48407b5501d05f647d066b2ca2a523453f187acdd786e0d0ff75ed" Feb 19 10:26:08 crc kubenswrapper[4965]: E0219 
10:26:08.686646 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"554c2749ba48407b5501d05f647d066b2ca2a523453f187acdd786e0d0ff75ed\": container with ID starting with 554c2749ba48407b5501d05f647d066b2ca2a523453f187acdd786e0d0ff75ed not found: ID does not exist" containerID="554c2749ba48407b5501d05f647d066b2ca2a523453f187acdd786e0d0ff75ed" Feb 19 10:26:08 crc kubenswrapper[4965]: I0219 10:26:08.686713 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"554c2749ba48407b5501d05f647d066b2ca2a523453f187acdd786e0d0ff75ed"} err="failed to get container status \"554c2749ba48407b5501d05f647d066b2ca2a523453f187acdd786e0d0ff75ed\": rpc error: code = NotFound desc = could not find container \"554c2749ba48407b5501d05f647d066b2ca2a523453f187acdd786e0d0ff75ed\": container with ID starting with 554c2749ba48407b5501d05f647d066b2ca2a523453f187acdd786e0d0ff75ed not found: ID does not exist" Feb 19 10:26:09 crc kubenswrapper[4965]: I0219 10:26:09.210561 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5627a26d-39c2-46ea-8a47-47037d30662c" path="/var/lib/kubelet/pods/5627a26d-39c2-46ea-8a47-47037d30662c/volumes" Feb 19 10:26:48 crc kubenswrapper[4965]: I0219 10:26:48.988241 4965 generic.go:334] "Generic (PLEG): container finished" podID="b3d2f922-3941-4ff3-92fc-6bb14cd46698" containerID="f7d44865b4ec1d8bc1fa5ef1878df0d643604e949e31b6cfc810961286f83d21" exitCode=0 Feb 19 10:26:48 crc kubenswrapper[4965]: I0219 10:26:48.988368 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2" event={"ID":"b3d2f922-3941-4ff3-92fc-6bb14cd46698","Type":"ContainerDied","Data":"f7d44865b4ec1d8bc1fa5ef1878df0d643604e949e31b6cfc810961286f83d21"} Feb 19 10:26:50 crc kubenswrapper[4965]: I0219 10:26:50.704871 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2" Feb 19 10:26:50 crc kubenswrapper[4965]: I0219 10:26:50.831583 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-inventory\") pod \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " Feb 19 10:26:50 crc kubenswrapper[4965]: I0219 10:26:50.831673 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ceilometer-compute-config-data-0\") pod \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " Feb 19 10:26:50 crc kubenswrapper[4965]: I0219 10:26:50.831713 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ceilometer-compute-config-data-2\") pod \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " Feb 19 10:26:50 crc kubenswrapper[4965]: I0219 10:26:50.831748 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-telemetry-combined-ca-bundle\") pod \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " Feb 19 10:26:50 crc kubenswrapper[4965]: I0219 10:26:50.831797 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl7b8\" (UniqueName: \"kubernetes.io/projected/b3d2f922-3941-4ff3-92fc-6bb14cd46698-kube-api-access-sl7b8\") pod \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " Feb 19 10:26:50 crc kubenswrapper[4965]: 
I0219 10:26:50.831845 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ssh-key-openstack-edpm-ipam\") pod \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " Feb 19 10:26:50 crc kubenswrapper[4965]: I0219 10:26:50.831897 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ceilometer-compute-config-data-1\") pod \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " Feb 19 10:26:50 crc kubenswrapper[4965]: I0219 10:26:50.882339 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3d2f922-3941-4ff3-92fc-6bb14cd46698-kube-api-access-sl7b8" (OuterVolumeSpecName: "kube-api-access-sl7b8") pod "b3d2f922-3941-4ff3-92fc-6bb14cd46698" (UID: "b3d2f922-3941-4ff3-92fc-6bb14cd46698"). InnerVolumeSpecName "kube-api-access-sl7b8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:50 crc kubenswrapper[4965]: I0219 10:26:50.884695 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "b3d2f922-3941-4ff3-92fc-6bb14cd46698" (UID: "b3d2f922-3941-4ff3-92fc-6bb14cd46698"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:50 crc kubenswrapper[4965]: I0219 10:26:50.897286 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "b3d2f922-3941-4ff3-92fc-6bb14cd46698" (UID: "b3d2f922-3941-4ff3-92fc-6bb14cd46698"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:50 crc kubenswrapper[4965]: I0219 10:26:50.911933 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "b3d2f922-3941-4ff3-92fc-6bb14cd46698" (UID: "b3d2f922-3941-4ff3-92fc-6bb14cd46698"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:50 crc kubenswrapper[4965]: I0219 10:26:50.919344 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-inventory" (OuterVolumeSpecName: "inventory") pod "b3d2f922-3941-4ff3-92fc-6bb14cd46698" (UID: "b3d2f922-3941-4ff3-92fc-6bb14cd46698"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:50 crc kubenswrapper[4965]: I0219 10:26:50.923452 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "b3d2f922-3941-4ff3-92fc-6bb14cd46698" (UID: "b3d2f922-3941-4ff3-92fc-6bb14cd46698"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:50 crc kubenswrapper[4965]: I0219 10:26:50.935050 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b3d2f922-3941-4ff3-92fc-6bb14cd46698" (UID: "b3d2f922-3941-4ff3-92fc-6bb14cd46698"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:50 crc kubenswrapper[4965]: I0219 10:26:50.935142 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ssh-key-openstack-edpm-ipam\") pod \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\" (UID: \"b3d2f922-3941-4ff3-92fc-6bb14cd46698\") " Feb 19 10:26:50 crc kubenswrapper[4965]: I0219 10:26:50.935696 4965 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:50 crc kubenswrapper[4965]: I0219 10:26:50.935718 4965 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:50 crc kubenswrapper[4965]: I0219 10:26:50.935728 4965 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:50 crc kubenswrapper[4965]: I0219 10:26:50.935739 4965 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:50 crc kubenswrapper[4965]: I0219 10:26:50.935747 4965 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:50 crc kubenswrapper[4965]: I0219 10:26:50.935757 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl7b8\" (UniqueName: \"kubernetes.io/projected/b3d2f922-3941-4ff3-92fc-6bb14cd46698-kube-api-access-sl7b8\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:50 crc kubenswrapper[4965]: W0219 10:26:50.935810 4965 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b3d2f922-3941-4ff3-92fc-6bb14cd46698/volumes/kubernetes.io~secret/ssh-key-openstack-edpm-ipam Feb 19 10:26:50 crc kubenswrapper[4965]: I0219 10:26:50.935818 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b3d2f922-3941-4ff3-92fc-6bb14cd46698" (UID: "b3d2f922-3941-4ff3-92fc-6bb14cd46698"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:51 crc kubenswrapper[4965]: I0219 10:26:51.008538 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2" event={"ID":"b3d2f922-3941-4ff3-92fc-6bb14cd46698","Type":"ContainerDied","Data":"aa924cd65cf28af1d33b163268570233e7518117bb4cfbe75b3bc5dc5d187226"} Feb 19 10:26:51 crc kubenswrapper[4965]: I0219 10:26:51.008589 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa924cd65cf28af1d33b163268570233e7518117bb4cfbe75b3bc5dc5d187226" Feb 19 10:26:51 crc kubenswrapper[4965]: I0219 10:26:51.008664 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2" Feb 19 10:26:51 crc kubenswrapper[4965]: I0219 10:26:51.037825 4965 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3d2f922-3941-4ff3-92fc-6bb14cd46698-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:16 crc kubenswrapper[4965]: I0219 10:27:16.601234 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:27:16 crc kubenswrapper[4965]: I0219 10:27:16.601873 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:27:26 crc kubenswrapper[4965]: E0219 10:27:26.270162 4965 upgradeaware.go:427] Error proxying data from client to backend: readfrom 
tcp 38.102.83.196:38776->38.102.83.196:35139: write tcp 38.102.83.196:38776->38.102.83.196:35139: write: broken pipe Feb 19 10:27:46 crc kubenswrapper[4965]: I0219 10:27:46.601836 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:27:46 crc kubenswrapper[4965]: I0219 10:27:46.602518 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:28:16 crc kubenswrapper[4965]: I0219 10:28:16.600728 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:28:16 crc kubenswrapper[4965]: I0219 10:28:16.601248 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:28:16 crc kubenswrapper[4965]: I0219 10:28:16.601300 4965 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" Feb 19 10:28:16 crc kubenswrapper[4965]: I0219 10:28:16.602180 4965 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"d46dcae3f2586439cac7ca3262f83dacf19b8435d7db00d5455101ba06915208"} pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:28:16 crc kubenswrapper[4965]: I0219 10:28:16.602271 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" containerID="cri-o://d46dcae3f2586439cac7ca3262f83dacf19b8435d7db00d5455101ba06915208" gracePeriod=600 Feb 19 10:28:16 crc kubenswrapper[4965]: I0219 10:28:16.930579 4965 generic.go:334] "Generic (PLEG): container finished" podID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerID="d46dcae3f2586439cac7ca3262f83dacf19b8435d7db00d5455101ba06915208" exitCode=0 Feb 19 10:28:16 crc kubenswrapper[4965]: I0219 10:28:16.930620 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerDied","Data":"d46dcae3f2586439cac7ca3262f83dacf19b8435d7db00d5455101ba06915208"} Feb 19 10:28:16 crc kubenswrapper[4965]: I0219 10:28:16.930894 4965 scope.go:117] "RemoveContainer" containerID="0adc6e28f055d539583ad7bb06c2cd0eb874f0b736c831a9dc5b0964a3e19dcf" Feb 19 10:28:17 crc kubenswrapper[4965]: I0219 10:28:17.943188 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerStarted","Data":"b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b"} Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.412426 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 10:28:26 crc kubenswrapper[4965]: E0219 10:28:26.413859 4965 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d2f922-3941-4ff3-92fc-6bb14cd46698" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.413888 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d2f922-3941-4ff3-92fc-6bb14cd46698" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 10:28:26 crc kubenswrapper[4965]: E0219 10:28:26.413912 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5627a26d-39c2-46ea-8a47-47037d30662c" containerName="extract-content" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.413925 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="5627a26d-39c2-46ea-8a47-47037d30662c" containerName="extract-content" Feb 19 10:28:26 crc kubenswrapper[4965]: E0219 10:28:26.413944 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5627a26d-39c2-46ea-8a47-47037d30662c" containerName="registry-server" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.413955 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="5627a26d-39c2-46ea-8a47-47037d30662c" containerName="registry-server" Feb 19 10:28:26 crc kubenswrapper[4965]: E0219 10:28:26.413982 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5627a26d-39c2-46ea-8a47-47037d30662c" containerName="extract-utilities" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.413992 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="5627a26d-39c2-46ea-8a47-47037d30662c" containerName="extract-utilities" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.414344 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="5627a26d-39c2-46ea-8a47-47037d30662c" containerName="registry-server" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.414394 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d2f922-3941-4ff3-92fc-6bb14cd46698" 
containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.415592 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.423062 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.423328 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.425604 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.426446 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-xx4mq" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.429085 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.575222 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.575319 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-config-data\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.575392 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.575425 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.575469 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.575514 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.575559 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xgvc\" (UniqueName: \"kubernetes.io/projected/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-kube-api-access-9xgvc\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.575584 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.575613 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.678059 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.678119 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-config-data\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.678181 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.678232 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.678285 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.678321 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.678360 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xgvc\" (UniqueName: \"kubernetes.io/projected/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-kube-api-access-9xgvc\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.678391 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.678416 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.678990 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.679165 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.679704 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.679764 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.680681 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-config-data\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.684344 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.685642 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.696395 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.707329 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xgvc\" (UniqueName: \"kubernetes.io/projected/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-kube-api-access-9xgvc\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.722731 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") " 
pod="openstack/tempest-tests-tempest" Feb 19 10:28:26 crc kubenswrapper[4965]: I0219 10:28:26.747859 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 10:28:27 crc kubenswrapper[4965]: I0219 10:28:27.262625 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 10:28:27 crc kubenswrapper[4965]: I0219 10:28:27.303938 4965 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:28:28 crc kubenswrapper[4965]: I0219 10:28:28.045416 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a","Type":"ContainerStarted","Data":"a832fb108f47dade4554a1edebd06ec73d36c02cfd8817239c66fb5387bf0c36"} Feb 19 10:29:05 crc kubenswrapper[4965]: E0219 10:29:05.199590 4965 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 19 10:29:05 crc kubenswrapper[4965]: E0219 10:29:05.200197 4965 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9xgvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:29:05 crc kubenswrapper[4965]: E0219 10:29:05.201671 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a" Feb 19 10:29:05 crc kubenswrapper[4965]: E0219 10:29:05.472721 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a" Feb 19 10:29:23 crc 
kubenswrapper[4965]: I0219 10:29:23.668708 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a","Type":"ContainerStarted","Data":"8b2affc2f3e6e50f0631edaa1a69fd1eab1d6a7c4850d0ae0fd554c0615ce2f7"} Feb 19 10:29:23 crc kubenswrapper[4965]: I0219 10:29:23.690698 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.838046993 podStartE2EDuration="58.690675609s" podCreationTimestamp="2026-02-19 10:28:25 +0000 UTC" firstStartedPulling="2026-02-19 10:28:27.30364237 +0000 UTC m=+2762.924963680" lastFinishedPulling="2026-02-19 10:29:22.156270986 +0000 UTC m=+2817.777592296" observedRunningTime="2026-02-19 10:29:23.690234799 +0000 UTC m=+2819.311556119" watchObservedRunningTime="2026-02-19 10:29:23.690675609 +0000 UTC m=+2819.311996919" Feb 19 10:29:42 crc kubenswrapper[4965]: I0219 10:29:42.296995 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2zrrh"] Feb 19 10:29:42 crc kubenswrapper[4965]: I0219 10:29:42.300034 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zrrh" Feb 19 10:29:42 crc kubenswrapper[4965]: I0219 10:29:42.319114 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zrrh"] Feb 19 10:29:42 crc kubenswrapper[4965]: I0219 10:29:42.357171 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74608448-9e6b-408f-a0fa-3958e932e2aa-catalog-content\") pod \"redhat-marketplace-2zrrh\" (UID: \"74608448-9e6b-408f-a0fa-3958e932e2aa\") " pod="openshift-marketplace/redhat-marketplace-2zrrh" Feb 19 10:29:42 crc kubenswrapper[4965]: I0219 10:29:42.357607 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74608448-9e6b-408f-a0fa-3958e932e2aa-utilities\") pod \"redhat-marketplace-2zrrh\" (UID: \"74608448-9e6b-408f-a0fa-3958e932e2aa\") " pod="openshift-marketplace/redhat-marketplace-2zrrh" Feb 19 10:29:42 crc kubenswrapper[4965]: I0219 10:29:42.358170 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln2tr\" (UniqueName: \"kubernetes.io/projected/74608448-9e6b-408f-a0fa-3958e932e2aa-kube-api-access-ln2tr\") pod \"redhat-marketplace-2zrrh\" (UID: \"74608448-9e6b-408f-a0fa-3958e932e2aa\") " pod="openshift-marketplace/redhat-marketplace-2zrrh" Feb 19 10:29:42 crc kubenswrapper[4965]: I0219 10:29:42.460366 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln2tr\" (UniqueName: \"kubernetes.io/projected/74608448-9e6b-408f-a0fa-3958e932e2aa-kube-api-access-ln2tr\") pod \"redhat-marketplace-2zrrh\" (UID: \"74608448-9e6b-408f-a0fa-3958e932e2aa\") " pod="openshift-marketplace/redhat-marketplace-2zrrh" Feb 19 10:29:42 crc kubenswrapper[4965]: I0219 10:29:42.460455 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74608448-9e6b-408f-a0fa-3958e932e2aa-catalog-content\") pod \"redhat-marketplace-2zrrh\" (UID: \"74608448-9e6b-408f-a0fa-3958e932e2aa\") " pod="openshift-marketplace/redhat-marketplace-2zrrh" Feb 19 10:29:42 crc kubenswrapper[4965]: I0219 10:29:42.460481 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74608448-9e6b-408f-a0fa-3958e932e2aa-utilities\") pod \"redhat-marketplace-2zrrh\" (UID: \"74608448-9e6b-408f-a0fa-3958e932e2aa\") " pod="openshift-marketplace/redhat-marketplace-2zrrh" Feb 19 10:29:42 crc kubenswrapper[4965]: I0219 10:29:42.460983 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74608448-9e6b-408f-a0fa-3958e932e2aa-catalog-content\") pod \"redhat-marketplace-2zrrh\" (UID: \"74608448-9e6b-408f-a0fa-3958e932e2aa\") " pod="openshift-marketplace/redhat-marketplace-2zrrh" Feb 19 10:29:42 crc kubenswrapper[4965]: I0219 10:29:42.461025 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74608448-9e6b-408f-a0fa-3958e932e2aa-utilities\") pod \"redhat-marketplace-2zrrh\" (UID: \"74608448-9e6b-408f-a0fa-3958e932e2aa\") " pod="openshift-marketplace/redhat-marketplace-2zrrh" Feb 19 10:29:42 crc kubenswrapper[4965]: I0219 10:29:42.490173 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln2tr\" (UniqueName: \"kubernetes.io/projected/74608448-9e6b-408f-a0fa-3958e932e2aa-kube-api-access-ln2tr\") pod \"redhat-marketplace-2zrrh\" (UID: \"74608448-9e6b-408f-a0fa-3958e932e2aa\") " pod="openshift-marketplace/redhat-marketplace-2zrrh" Feb 19 10:29:42 crc kubenswrapper[4965]: I0219 10:29:42.625048 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zrrh" Feb 19 10:29:43 crc kubenswrapper[4965]: I0219 10:29:43.151534 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zrrh"] Feb 19 10:29:43 crc kubenswrapper[4965]: W0219 10:29:43.159334 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74608448_9e6b_408f_a0fa_3958e932e2aa.slice/crio-05d99659c511488356b66d4f7fcbbf8aec118446e3354fa602bcda8fb88c0b56 WatchSource:0}: Error finding container 05d99659c511488356b66d4f7fcbbf8aec118446e3354fa602bcda8fb88c0b56: Status 404 returned error can't find the container with id 05d99659c511488356b66d4f7fcbbf8aec118446e3354fa602bcda8fb88c0b56 Feb 19 10:29:43 crc kubenswrapper[4965]: I0219 10:29:43.880539 4965 generic.go:334] "Generic (PLEG): container finished" podID="74608448-9e6b-408f-a0fa-3958e932e2aa" containerID="08f40b20e625f20abc8e942f2807b822165f213b8edb157701643cb137bd74e6" exitCode=0 Feb 19 10:29:43 crc kubenswrapper[4965]: I0219 10:29:43.880607 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zrrh" event={"ID":"74608448-9e6b-408f-a0fa-3958e932e2aa","Type":"ContainerDied","Data":"08f40b20e625f20abc8e942f2807b822165f213b8edb157701643cb137bd74e6"} Feb 19 10:29:43 crc kubenswrapper[4965]: I0219 10:29:43.880816 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zrrh" event={"ID":"74608448-9e6b-408f-a0fa-3958e932e2aa","Type":"ContainerStarted","Data":"05d99659c511488356b66d4f7fcbbf8aec118446e3354fa602bcda8fb88c0b56"} Feb 19 10:29:45 crc kubenswrapper[4965]: I0219 10:29:45.899019 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zrrh" 
event={"ID":"74608448-9e6b-408f-a0fa-3958e932e2aa","Type":"ContainerStarted","Data":"e4f777b532488a35bc05bf8cefee8faeb44c57176167838ae106a707a19759cc"} Feb 19 10:29:46 crc kubenswrapper[4965]: I0219 10:29:46.910360 4965 generic.go:334] "Generic (PLEG): container finished" podID="74608448-9e6b-408f-a0fa-3958e932e2aa" containerID="e4f777b532488a35bc05bf8cefee8faeb44c57176167838ae106a707a19759cc" exitCode=0 Feb 19 10:29:46 crc kubenswrapper[4965]: I0219 10:29:46.910424 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zrrh" event={"ID":"74608448-9e6b-408f-a0fa-3958e932e2aa","Type":"ContainerDied","Data":"e4f777b532488a35bc05bf8cefee8faeb44c57176167838ae106a707a19759cc"} Feb 19 10:29:47 crc kubenswrapper[4965]: I0219 10:29:47.925485 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zrrh" event={"ID":"74608448-9e6b-408f-a0fa-3958e932e2aa","Type":"ContainerStarted","Data":"82d760afeea779fb2deb3f8dd695ccd9234a2e3562f3f494e02aa1f825721b56"} Feb 19 10:29:48 crc kubenswrapper[4965]: I0219 10:29:48.962039 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2zrrh" podStartSLOduration=3.22372302 podStartE2EDuration="6.962007973s" podCreationTimestamp="2026-02-19 10:29:42 +0000 UTC" firstStartedPulling="2026-02-19 10:29:43.883597057 +0000 UTC m=+2839.504918377" lastFinishedPulling="2026-02-19 10:29:47.62188202 +0000 UTC m=+2843.243203330" observedRunningTime="2026-02-19 10:29:48.952698258 +0000 UTC m=+2844.574019588" watchObservedRunningTime="2026-02-19 10:29:48.962007973 +0000 UTC m=+2844.583329283" Feb 19 10:29:52 crc kubenswrapper[4965]: I0219 10:29:52.625151 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2zrrh" Feb 19 10:29:52 crc kubenswrapper[4965]: I0219 10:29:52.625766 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-2zrrh" Feb 19 10:29:52 crc kubenswrapper[4965]: I0219 10:29:52.678901 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2zrrh" Feb 19 10:29:53 crc kubenswrapper[4965]: I0219 10:29:53.015644 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2zrrh" Feb 19 10:29:53 crc kubenswrapper[4965]: I0219 10:29:53.065932 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zrrh"] Feb 19 10:29:54 crc kubenswrapper[4965]: I0219 10:29:54.989462 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2zrrh" podUID="74608448-9e6b-408f-a0fa-3958e932e2aa" containerName="registry-server" containerID="cri-o://82d760afeea779fb2deb3f8dd695ccd9234a2e3562f3f494e02aa1f825721b56" gracePeriod=2 Feb 19 10:29:56 crc kubenswrapper[4965]: I0219 10:29:56.740668 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zrrh" Feb 19 10:29:56 crc kubenswrapper[4965]: I0219 10:29:56.873896 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74608448-9e6b-408f-a0fa-3958e932e2aa-utilities\") pod \"74608448-9e6b-408f-a0fa-3958e932e2aa\" (UID: \"74608448-9e6b-408f-a0fa-3958e932e2aa\") " Feb 19 10:29:56 crc kubenswrapper[4965]: I0219 10:29:56.874218 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74608448-9e6b-408f-a0fa-3958e932e2aa-catalog-content\") pod \"74608448-9e6b-408f-a0fa-3958e932e2aa\" (UID: \"74608448-9e6b-408f-a0fa-3958e932e2aa\") " Feb 19 10:29:56 crc kubenswrapper[4965]: I0219 10:29:56.874347 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln2tr\" (UniqueName: \"kubernetes.io/projected/74608448-9e6b-408f-a0fa-3958e932e2aa-kube-api-access-ln2tr\") pod \"74608448-9e6b-408f-a0fa-3958e932e2aa\" (UID: \"74608448-9e6b-408f-a0fa-3958e932e2aa\") " Feb 19 10:29:56 crc kubenswrapper[4965]: I0219 10:29:56.875253 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74608448-9e6b-408f-a0fa-3958e932e2aa-utilities" (OuterVolumeSpecName: "utilities") pod "74608448-9e6b-408f-a0fa-3958e932e2aa" (UID: "74608448-9e6b-408f-a0fa-3958e932e2aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:29:56 crc kubenswrapper[4965]: I0219 10:29:56.884121 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74608448-9e6b-408f-a0fa-3958e932e2aa-kube-api-access-ln2tr" (OuterVolumeSpecName: "kube-api-access-ln2tr") pod "74608448-9e6b-408f-a0fa-3958e932e2aa" (UID: "74608448-9e6b-408f-a0fa-3958e932e2aa"). InnerVolumeSpecName "kube-api-access-ln2tr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:29:56 crc kubenswrapper[4965]: I0219 10:29:56.906618 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74608448-9e6b-408f-a0fa-3958e932e2aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74608448-9e6b-408f-a0fa-3958e932e2aa" (UID: "74608448-9e6b-408f-a0fa-3958e932e2aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:29:56 crc kubenswrapper[4965]: I0219 10:29:56.977279 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74608448-9e6b-408f-a0fa-3958e932e2aa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:56 crc kubenswrapper[4965]: I0219 10:29:56.977343 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln2tr\" (UniqueName: \"kubernetes.io/projected/74608448-9e6b-408f-a0fa-3958e932e2aa-kube-api-access-ln2tr\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:56 crc kubenswrapper[4965]: I0219 10:29:56.977387 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74608448-9e6b-408f-a0fa-3958e932e2aa-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:57 crc kubenswrapper[4965]: I0219 10:29:57.020791 4965 generic.go:334] "Generic (PLEG): container finished" podID="74608448-9e6b-408f-a0fa-3958e932e2aa" containerID="82d760afeea779fb2deb3f8dd695ccd9234a2e3562f3f494e02aa1f825721b56" exitCode=0 Feb 19 10:29:57 crc kubenswrapper[4965]: I0219 10:29:57.020835 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zrrh" event={"ID":"74608448-9e6b-408f-a0fa-3958e932e2aa","Type":"ContainerDied","Data":"82d760afeea779fb2deb3f8dd695ccd9234a2e3562f3f494e02aa1f825721b56"} Feb 19 10:29:57 crc kubenswrapper[4965]: I0219 10:29:57.020865 4965 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-2zrrh" event={"ID":"74608448-9e6b-408f-a0fa-3958e932e2aa","Type":"ContainerDied","Data":"05d99659c511488356b66d4f7fcbbf8aec118446e3354fa602bcda8fb88c0b56"} Feb 19 10:29:57 crc kubenswrapper[4965]: I0219 10:29:57.020882 4965 scope.go:117] "RemoveContainer" containerID="82d760afeea779fb2deb3f8dd695ccd9234a2e3562f3f494e02aa1f825721b56" Feb 19 10:29:57 crc kubenswrapper[4965]: I0219 10:29:57.020885 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zrrh" Feb 19 10:29:57 crc kubenswrapper[4965]: I0219 10:29:57.049959 4965 scope.go:117] "RemoveContainer" containerID="e4f777b532488a35bc05bf8cefee8faeb44c57176167838ae106a707a19759cc" Feb 19 10:29:57 crc kubenswrapper[4965]: I0219 10:29:57.056176 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zrrh"] Feb 19 10:29:57 crc kubenswrapper[4965]: I0219 10:29:57.066086 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zrrh"] Feb 19 10:29:57 crc kubenswrapper[4965]: I0219 10:29:57.083622 4965 scope.go:117] "RemoveContainer" containerID="08f40b20e625f20abc8e942f2807b822165f213b8edb157701643cb137bd74e6" Feb 19 10:29:57 crc kubenswrapper[4965]: I0219 10:29:57.139493 4965 scope.go:117] "RemoveContainer" containerID="82d760afeea779fb2deb3f8dd695ccd9234a2e3562f3f494e02aa1f825721b56" Feb 19 10:29:57 crc kubenswrapper[4965]: E0219 10:29:57.140049 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82d760afeea779fb2deb3f8dd695ccd9234a2e3562f3f494e02aa1f825721b56\": container with ID starting with 82d760afeea779fb2deb3f8dd695ccd9234a2e3562f3f494e02aa1f825721b56 not found: ID does not exist" containerID="82d760afeea779fb2deb3f8dd695ccd9234a2e3562f3f494e02aa1f825721b56" Feb 19 10:29:57 crc kubenswrapper[4965]: I0219 10:29:57.140107 4965 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82d760afeea779fb2deb3f8dd695ccd9234a2e3562f3f494e02aa1f825721b56"} err="failed to get container status \"82d760afeea779fb2deb3f8dd695ccd9234a2e3562f3f494e02aa1f825721b56\": rpc error: code = NotFound desc = could not find container \"82d760afeea779fb2deb3f8dd695ccd9234a2e3562f3f494e02aa1f825721b56\": container with ID starting with 82d760afeea779fb2deb3f8dd695ccd9234a2e3562f3f494e02aa1f825721b56 not found: ID does not exist" Feb 19 10:29:57 crc kubenswrapper[4965]: I0219 10:29:57.140143 4965 scope.go:117] "RemoveContainer" containerID="e4f777b532488a35bc05bf8cefee8faeb44c57176167838ae106a707a19759cc" Feb 19 10:29:57 crc kubenswrapper[4965]: E0219 10:29:57.140650 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4f777b532488a35bc05bf8cefee8faeb44c57176167838ae106a707a19759cc\": container with ID starting with e4f777b532488a35bc05bf8cefee8faeb44c57176167838ae106a707a19759cc not found: ID does not exist" containerID="e4f777b532488a35bc05bf8cefee8faeb44c57176167838ae106a707a19759cc" Feb 19 10:29:57 crc kubenswrapper[4965]: I0219 10:29:57.140794 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4f777b532488a35bc05bf8cefee8faeb44c57176167838ae106a707a19759cc"} err="failed to get container status \"e4f777b532488a35bc05bf8cefee8faeb44c57176167838ae106a707a19759cc\": rpc error: code = NotFound desc = could not find container \"e4f777b532488a35bc05bf8cefee8faeb44c57176167838ae106a707a19759cc\": container with ID starting with e4f777b532488a35bc05bf8cefee8faeb44c57176167838ae106a707a19759cc not found: ID does not exist" Feb 19 10:29:57 crc kubenswrapper[4965]: I0219 10:29:57.140916 4965 scope.go:117] "RemoveContainer" containerID="08f40b20e625f20abc8e942f2807b822165f213b8edb157701643cb137bd74e6" Feb 19 10:29:57 crc kubenswrapper[4965]: E0219 
10:29:57.141377 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08f40b20e625f20abc8e942f2807b822165f213b8edb157701643cb137bd74e6\": container with ID starting with 08f40b20e625f20abc8e942f2807b822165f213b8edb157701643cb137bd74e6 not found: ID does not exist" containerID="08f40b20e625f20abc8e942f2807b822165f213b8edb157701643cb137bd74e6" Feb 19 10:29:57 crc kubenswrapper[4965]: I0219 10:29:57.141415 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08f40b20e625f20abc8e942f2807b822165f213b8edb157701643cb137bd74e6"} err="failed to get container status \"08f40b20e625f20abc8e942f2807b822165f213b8edb157701643cb137bd74e6\": rpc error: code = NotFound desc = could not find container \"08f40b20e625f20abc8e942f2807b822165f213b8edb157701643cb137bd74e6\": container with ID starting with 08f40b20e625f20abc8e942f2807b822165f213b8edb157701643cb137bd74e6 not found: ID does not exist" Feb 19 10:29:57 crc kubenswrapper[4965]: I0219 10:29:57.215633 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74608448-9e6b-408f-a0fa-3958e932e2aa" path="/var/lib/kubelet/pods/74608448-9e6b-408f-a0fa-3958e932e2aa/volumes" Feb 19 10:30:00 crc kubenswrapper[4965]: I0219 10:30:00.148537 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524950-zvlr4"] Feb 19 10:30:00 crc kubenswrapper[4965]: E0219 10:30:00.149383 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74608448-9e6b-408f-a0fa-3958e932e2aa" containerName="registry-server" Feb 19 10:30:00 crc kubenswrapper[4965]: I0219 10:30:00.149408 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="74608448-9e6b-408f-a0fa-3958e932e2aa" containerName="registry-server" Feb 19 10:30:00 crc kubenswrapper[4965]: E0219 10:30:00.149456 4965 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="74608448-9e6b-408f-a0fa-3958e932e2aa" containerName="extract-content" Feb 19 10:30:00 crc kubenswrapper[4965]: I0219 10:30:00.149464 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="74608448-9e6b-408f-a0fa-3958e932e2aa" containerName="extract-content" Feb 19 10:30:00 crc kubenswrapper[4965]: E0219 10:30:00.149479 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74608448-9e6b-408f-a0fa-3958e932e2aa" containerName="extract-utilities" Feb 19 10:30:00 crc kubenswrapper[4965]: I0219 10:30:00.149488 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="74608448-9e6b-408f-a0fa-3958e932e2aa" containerName="extract-utilities" Feb 19 10:30:00 crc kubenswrapper[4965]: I0219 10:30:00.149722 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="74608448-9e6b-408f-a0fa-3958e932e2aa" containerName="registry-server" Feb 19 10:30:00 crc kubenswrapper[4965]: I0219 10:30:00.150490 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-zvlr4" Feb 19 10:30:00 crc kubenswrapper[4965]: I0219 10:30:00.161006 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 10:30:00 crc kubenswrapper[4965]: I0219 10:30:00.161380 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 10:30:00 crc kubenswrapper[4965]: I0219 10:30:00.166722 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524950-zvlr4"] Feb 19 10:30:00 crc kubenswrapper[4965]: I0219 10:30:00.255758 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a-secret-volume\") pod \"collect-profiles-29524950-zvlr4\" 
(UID: \"7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-zvlr4" Feb 19 10:30:00 crc kubenswrapper[4965]: I0219 10:30:00.255992 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkztz\" (UniqueName: \"kubernetes.io/projected/7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a-kube-api-access-hkztz\") pod \"collect-profiles-29524950-zvlr4\" (UID: \"7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-zvlr4" Feb 19 10:30:00 crc kubenswrapper[4965]: I0219 10:30:00.256032 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a-config-volume\") pod \"collect-profiles-29524950-zvlr4\" (UID: \"7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-zvlr4" Feb 19 10:30:00 crc kubenswrapper[4965]: I0219 10:30:00.357975 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkztz\" (UniqueName: \"kubernetes.io/projected/7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a-kube-api-access-hkztz\") pod \"collect-profiles-29524950-zvlr4\" (UID: \"7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-zvlr4" Feb 19 10:30:00 crc kubenswrapper[4965]: I0219 10:30:00.358032 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a-config-volume\") pod \"collect-profiles-29524950-zvlr4\" (UID: \"7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-zvlr4" Feb 19 10:30:00 crc kubenswrapper[4965]: I0219 10:30:00.358144 4965 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a-secret-volume\") pod \"collect-profiles-29524950-zvlr4\" (UID: \"7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-zvlr4" Feb 19 10:30:00 crc kubenswrapper[4965]: I0219 10:30:00.360287 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a-config-volume\") pod \"collect-profiles-29524950-zvlr4\" (UID: \"7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-zvlr4" Feb 19 10:30:00 crc kubenswrapper[4965]: I0219 10:30:00.373839 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a-secret-volume\") pod \"collect-profiles-29524950-zvlr4\" (UID: \"7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-zvlr4" Feb 19 10:30:00 crc kubenswrapper[4965]: I0219 10:30:00.375293 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkztz\" (UniqueName: \"kubernetes.io/projected/7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a-kube-api-access-hkztz\") pod \"collect-profiles-29524950-zvlr4\" (UID: \"7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-zvlr4" Feb 19 10:30:00 crc kubenswrapper[4965]: I0219 10:30:00.473489 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-zvlr4" Feb 19 10:30:00 crc kubenswrapper[4965]: I0219 10:30:00.995912 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524950-zvlr4"] Feb 19 10:30:01 crc kubenswrapper[4965]: I0219 10:30:01.071487 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-zvlr4" event={"ID":"7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a","Type":"ContainerStarted","Data":"f8ca63c9e81e46ac0688e89e0ddf669dd267863075b9030ec5027ca32c42d021"} Feb 19 10:30:02 crc kubenswrapper[4965]: I0219 10:30:02.084183 4965 generic.go:334] "Generic (PLEG): container finished" podID="7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a" containerID="79a999f9d8f335b0c01464bcd060a66ad9f27ade7ac8cc6f4a8927d03c8bc644" exitCode=0 Feb 19 10:30:02 crc kubenswrapper[4965]: I0219 10:30:02.084237 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-zvlr4" event={"ID":"7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a","Type":"ContainerDied","Data":"79a999f9d8f335b0c01464bcd060a66ad9f27ade7ac8cc6f4a8927d03c8bc644"} Feb 19 10:30:03 crc kubenswrapper[4965]: I0219 10:30:03.554251 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-zvlr4" Feb 19 10:30:03 crc kubenswrapper[4965]: I0219 10:30:03.646364 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a-secret-volume\") pod \"7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a\" (UID: \"7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a\") " Feb 19 10:30:03 crc kubenswrapper[4965]: I0219 10:30:03.646447 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkztz\" (UniqueName: \"kubernetes.io/projected/7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a-kube-api-access-hkztz\") pod \"7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a\" (UID: \"7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a\") " Feb 19 10:30:03 crc kubenswrapper[4965]: I0219 10:30:03.646641 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a-config-volume\") pod \"7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a\" (UID: \"7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a\") " Feb 19 10:30:03 crc kubenswrapper[4965]: I0219 10:30:03.647671 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a-config-volume" (OuterVolumeSpecName: "config-volume") pod "7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a" (UID: "7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:30:03 crc kubenswrapper[4965]: I0219 10:30:03.651718 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a" (UID: "7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:30:03 crc kubenswrapper[4965]: I0219 10:30:03.660461 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a-kube-api-access-hkztz" (OuterVolumeSpecName: "kube-api-access-hkztz") pod "7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a" (UID: "7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a"). InnerVolumeSpecName "kube-api-access-hkztz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:30:03 crc kubenswrapper[4965]: I0219 10:30:03.749616 4965 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:03 crc kubenswrapper[4965]: I0219 10:30:03.749855 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkztz\" (UniqueName: \"kubernetes.io/projected/7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a-kube-api-access-hkztz\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:03 crc kubenswrapper[4965]: I0219 10:30:03.749868 4965 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:04 crc kubenswrapper[4965]: I0219 10:30:04.106915 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-zvlr4" event={"ID":"7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a","Type":"ContainerDied","Data":"f8ca63c9e81e46ac0688e89e0ddf669dd267863075b9030ec5027ca32c42d021"} Feb 19 10:30:04 crc kubenswrapper[4965]: I0219 10:30:04.106969 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8ca63c9e81e46ac0688e89e0ddf669dd267863075b9030ec5027ca32c42d021" Feb 19 10:30:04 crc kubenswrapper[4965]: I0219 10:30:04.107250 4965 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-zvlr4" Feb 19 10:30:04 crc kubenswrapper[4965]: I0219 10:30:04.632100 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524905-682z8"] Feb 19 10:30:04 crc kubenswrapper[4965]: I0219 10:30:04.649154 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524905-682z8"] Feb 19 10:30:05 crc kubenswrapper[4965]: I0219 10:30:05.211876 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79e5acf4-3803-4356-aa12-622cceae90a5" path="/var/lib/kubelet/pods/79e5acf4-3803-4356-aa12-622cceae90a5/volumes" Feb 19 10:30:07 crc kubenswrapper[4965]: I0219 10:30:07.079304 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sljw4"] Feb 19 10:30:07 crc kubenswrapper[4965]: E0219 10:30:07.080918 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a" containerName="collect-profiles" Feb 19 10:30:07 crc kubenswrapper[4965]: I0219 10:30:07.081172 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a" containerName="collect-profiles" Feb 19 10:30:07 crc kubenswrapper[4965]: I0219 10:30:07.081659 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a85b9c1-d5bc-420f-af25-6f7b9f2e2d5a" containerName="collect-profiles" Feb 19 10:30:07 crc kubenswrapper[4965]: I0219 10:30:07.084539 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sljw4" Feb 19 10:30:07 crc kubenswrapper[4965]: I0219 10:30:07.100109 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sljw4"] Feb 19 10:30:07 crc kubenswrapper[4965]: I0219 10:30:07.237497 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb737c71-ed34-4949-9b9d-4a18d2660300-utilities\") pod \"certified-operators-sljw4\" (UID: \"eb737c71-ed34-4949-9b9d-4a18d2660300\") " pod="openshift-marketplace/certified-operators-sljw4" Feb 19 10:30:07 crc kubenswrapper[4965]: I0219 10:30:07.237567 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhhpd\" (UniqueName: \"kubernetes.io/projected/eb737c71-ed34-4949-9b9d-4a18d2660300-kube-api-access-xhhpd\") pod \"certified-operators-sljw4\" (UID: \"eb737c71-ed34-4949-9b9d-4a18d2660300\") " pod="openshift-marketplace/certified-operators-sljw4" Feb 19 10:30:07 crc kubenswrapper[4965]: I0219 10:30:07.237817 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb737c71-ed34-4949-9b9d-4a18d2660300-catalog-content\") pod \"certified-operators-sljw4\" (UID: \"eb737c71-ed34-4949-9b9d-4a18d2660300\") " pod="openshift-marketplace/certified-operators-sljw4" Feb 19 10:30:07 crc kubenswrapper[4965]: I0219 10:30:07.340500 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb737c71-ed34-4949-9b9d-4a18d2660300-catalog-content\") pod \"certified-operators-sljw4\" (UID: \"eb737c71-ed34-4949-9b9d-4a18d2660300\") " pod="openshift-marketplace/certified-operators-sljw4" Feb 19 10:30:07 crc kubenswrapper[4965]: I0219 10:30:07.340703 4965 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb737c71-ed34-4949-9b9d-4a18d2660300-utilities\") pod \"certified-operators-sljw4\" (UID: \"eb737c71-ed34-4949-9b9d-4a18d2660300\") " pod="openshift-marketplace/certified-operators-sljw4" Feb 19 10:30:07 crc kubenswrapper[4965]: I0219 10:30:07.340729 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhhpd\" (UniqueName: \"kubernetes.io/projected/eb737c71-ed34-4949-9b9d-4a18d2660300-kube-api-access-xhhpd\") pod \"certified-operators-sljw4\" (UID: \"eb737c71-ed34-4949-9b9d-4a18d2660300\") " pod="openshift-marketplace/certified-operators-sljw4" Feb 19 10:30:07 crc kubenswrapper[4965]: I0219 10:30:07.341298 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb737c71-ed34-4949-9b9d-4a18d2660300-utilities\") pod \"certified-operators-sljw4\" (UID: \"eb737c71-ed34-4949-9b9d-4a18d2660300\") " pod="openshift-marketplace/certified-operators-sljw4" Feb 19 10:30:07 crc kubenswrapper[4965]: I0219 10:30:07.342081 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb737c71-ed34-4949-9b9d-4a18d2660300-catalog-content\") pod \"certified-operators-sljw4\" (UID: \"eb737c71-ed34-4949-9b9d-4a18d2660300\") " pod="openshift-marketplace/certified-operators-sljw4" Feb 19 10:30:07 crc kubenswrapper[4965]: I0219 10:30:07.370087 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhhpd\" (UniqueName: \"kubernetes.io/projected/eb737c71-ed34-4949-9b9d-4a18d2660300-kube-api-access-xhhpd\") pod \"certified-operators-sljw4\" (UID: \"eb737c71-ed34-4949-9b9d-4a18d2660300\") " pod="openshift-marketplace/certified-operators-sljw4" Feb 19 10:30:07 crc kubenswrapper[4965]: I0219 10:30:07.415412 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sljw4" Feb 19 10:30:08 crc kubenswrapper[4965]: I0219 10:30:08.128819 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sljw4"] Feb 19 10:30:08 crc kubenswrapper[4965]: I0219 10:30:08.160573 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sljw4" event={"ID":"eb737c71-ed34-4949-9b9d-4a18d2660300","Type":"ContainerStarted","Data":"d5e19462e603fc669c35861c7d9cb60a1e7c2730e109cb616d8a31751e7e3815"} Feb 19 10:30:09 crc kubenswrapper[4965]: I0219 10:30:09.173000 4965 generic.go:334] "Generic (PLEG): container finished" podID="eb737c71-ed34-4949-9b9d-4a18d2660300" containerID="81e5fa66e56d9bf3f6f0df9c4a319d9d99a2e1e00b5bc3c7e7d3286b1d88053e" exitCode=0 Feb 19 10:30:09 crc kubenswrapper[4965]: I0219 10:30:09.174446 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sljw4" event={"ID":"eb737c71-ed34-4949-9b9d-4a18d2660300","Type":"ContainerDied","Data":"81e5fa66e56d9bf3f6f0df9c4a319d9d99a2e1e00b5bc3c7e7d3286b1d88053e"} Feb 19 10:30:10 crc kubenswrapper[4965]: I0219 10:30:10.186563 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sljw4" event={"ID":"eb737c71-ed34-4949-9b9d-4a18d2660300","Type":"ContainerStarted","Data":"998079bc1dae2ffd287ec870a0ec7806adcb3dc1de2ab4d007a45a5726b9507c"} Feb 19 10:30:12 crc kubenswrapper[4965]: I0219 10:30:12.242856 4965 generic.go:334] "Generic (PLEG): container finished" podID="eb737c71-ed34-4949-9b9d-4a18d2660300" containerID="998079bc1dae2ffd287ec870a0ec7806adcb3dc1de2ab4d007a45a5726b9507c" exitCode=0 Feb 19 10:30:12 crc kubenswrapper[4965]: I0219 10:30:12.243301 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sljw4" 
event={"ID":"eb737c71-ed34-4949-9b9d-4a18d2660300","Type":"ContainerDied","Data":"998079bc1dae2ffd287ec870a0ec7806adcb3dc1de2ab4d007a45a5726b9507c"} Feb 19 10:30:14 crc kubenswrapper[4965]: I0219 10:30:14.275822 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sljw4" event={"ID":"eb737c71-ed34-4949-9b9d-4a18d2660300","Type":"ContainerStarted","Data":"38a49a379e70e6cb0a9bae3c6afcf8ab002077ea46cc52ad919996c82e7a715f"} Feb 19 10:30:14 crc kubenswrapper[4965]: I0219 10:30:14.299171 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sljw4" podStartSLOduration=3.184064895 podStartE2EDuration="7.299145621s" podCreationTimestamp="2026-02-19 10:30:07 +0000 UTC" firstStartedPulling="2026-02-19 10:30:09.175238023 +0000 UTC m=+2864.796559333" lastFinishedPulling="2026-02-19 10:30:13.290318749 +0000 UTC m=+2868.911640059" observedRunningTime="2026-02-19 10:30:14.295745089 +0000 UTC m=+2869.917066399" watchObservedRunningTime="2026-02-19 10:30:14.299145621 +0000 UTC m=+2869.920466931" Feb 19 10:30:16 crc kubenswrapper[4965]: I0219 10:30:16.601628 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:30:16 crc kubenswrapper[4965]: I0219 10:30:16.601998 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:30:17 crc kubenswrapper[4965]: I0219 10:30:17.415559 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-sljw4" Feb 19 10:30:17 crc kubenswrapper[4965]: I0219 10:30:17.415844 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sljw4" Feb 19 10:30:17 crc kubenswrapper[4965]: I0219 10:30:17.467717 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sljw4" Feb 19 10:30:18 crc kubenswrapper[4965]: I0219 10:30:18.395705 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sljw4" Feb 19 10:30:18 crc kubenswrapper[4965]: I0219 10:30:18.450845 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sljw4"] Feb 19 10:30:20 crc kubenswrapper[4965]: I0219 10:30:20.360853 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sljw4" podUID="eb737c71-ed34-4949-9b9d-4a18d2660300" containerName="registry-server" containerID="cri-o://38a49a379e70e6cb0a9bae3c6afcf8ab002077ea46cc52ad919996c82e7a715f" gracePeriod=2 Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.097859 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sljw4" Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.199345 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb737c71-ed34-4949-9b9d-4a18d2660300-utilities\") pod \"eb737c71-ed34-4949-9b9d-4a18d2660300\" (UID: \"eb737c71-ed34-4949-9b9d-4a18d2660300\") " Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.199488 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhhpd\" (UniqueName: \"kubernetes.io/projected/eb737c71-ed34-4949-9b9d-4a18d2660300-kube-api-access-xhhpd\") pod \"eb737c71-ed34-4949-9b9d-4a18d2660300\" (UID: \"eb737c71-ed34-4949-9b9d-4a18d2660300\") " Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.199585 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb737c71-ed34-4949-9b9d-4a18d2660300-catalog-content\") pod \"eb737c71-ed34-4949-9b9d-4a18d2660300\" (UID: \"eb737c71-ed34-4949-9b9d-4a18d2660300\") " Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.200224 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb737c71-ed34-4949-9b9d-4a18d2660300-utilities" (OuterVolumeSpecName: "utilities") pod "eb737c71-ed34-4949-9b9d-4a18d2660300" (UID: "eb737c71-ed34-4949-9b9d-4a18d2660300"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.200483 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb737c71-ed34-4949-9b9d-4a18d2660300-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.204636 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb737c71-ed34-4949-9b9d-4a18d2660300-kube-api-access-xhhpd" (OuterVolumeSpecName: "kube-api-access-xhhpd") pod "eb737c71-ed34-4949-9b9d-4a18d2660300" (UID: "eb737c71-ed34-4949-9b9d-4a18d2660300"). InnerVolumeSpecName "kube-api-access-xhhpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.248935 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb737c71-ed34-4949-9b9d-4a18d2660300-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb737c71-ed34-4949-9b9d-4a18d2660300" (UID: "eb737c71-ed34-4949-9b9d-4a18d2660300"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.304660 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhhpd\" (UniqueName: \"kubernetes.io/projected/eb737c71-ed34-4949-9b9d-4a18d2660300-kube-api-access-xhhpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.304688 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb737c71-ed34-4949-9b9d-4a18d2660300-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.373467 4965 generic.go:334] "Generic (PLEG): container finished" podID="eb737c71-ed34-4949-9b9d-4a18d2660300" containerID="38a49a379e70e6cb0a9bae3c6afcf8ab002077ea46cc52ad919996c82e7a715f" exitCode=0 Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.373513 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sljw4" event={"ID":"eb737c71-ed34-4949-9b9d-4a18d2660300","Type":"ContainerDied","Data":"38a49a379e70e6cb0a9bae3c6afcf8ab002077ea46cc52ad919996c82e7a715f"} Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.373552 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sljw4" event={"ID":"eb737c71-ed34-4949-9b9d-4a18d2660300","Type":"ContainerDied","Data":"d5e19462e603fc669c35861c7d9cb60a1e7c2730e109cb616d8a31751e7e3815"} Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.373576 4965 scope.go:117] "RemoveContainer" containerID="38a49a379e70e6cb0a9bae3c6afcf8ab002077ea46cc52ad919996c82e7a715f" Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.375172 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sljw4" Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.421592 4965 scope.go:117] "RemoveContainer" containerID="998079bc1dae2ffd287ec870a0ec7806adcb3dc1de2ab4d007a45a5726b9507c" Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.447587 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sljw4"] Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.451633 4965 scope.go:117] "RemoveContainer" containerID="81e5fa66e56d9bf3f6f0df9c4a319d9d99a2e1e00b5bc3c7e7d3286b1d88053e" Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.466893 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sljw4"] Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.515785 4965 scope.go:117] "RemoveContainer" containerID="38a49a379e70e6cb0a9bae3c6afcf8ab002077ea46cc52ad919996c82e7a715f" Feb 19 10:30:21 crc kubenswrapper[4965]: E0219 10:30:21.516464 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38a49a379e70e6cb0a9bae3c6afcf8ab002077ea46cc52ad919996c82e7a715f\": container with ID starting with 38a49a379e70e6cb0a9bae3c6afcf8ab002077ea46cc52ad919996c82e7a715f not found: ID does not exist" containerID="38a49a379e70e6cb0a9bae3c6afcf8ab002077ea46cc52ad919996c82e7a715f" Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.516510 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38a49a379e70e6cb0a9bae3c6afcf8ab002077ea46cc52ad919996c82e7a715f"} err="failed to get container status \"38a49a379e70e6cb0a9bae3c6afcf8ab002077ea46cc52ad919996c82e7a715f\": rpc error: code = NotFound desc = could not find container \"38a49a379e70e6cb0a9bae3c6afcf8ab002077ea46cc52ad919996c82e7a715f\": container with ID starting with 38a49a379e70e6cb0a9bae3c6afcf8ab002077ea46cc52ad919996c82e7a715f not 
found: ID does not exist" Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.516537 4965 scope.go:117] "RemoveContainer" containerID="998079bc1dae2ffd287ec870a0ec7806adcb3dc1de2ab4d007a45a5726b9507c" Feb 19 10:30:21 crc kubenswrapper[4965]: E0219 10:30:21.516910 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"998079bc1dae2ffd287ec870a0ec7806adcb3dc1de2ab4d007a45a5726b9507c\": container with ID starting with 998079bc1dae2ffd287ec870a0ec7806adcb3dc1de2ab4d007a45a5726b9507c not found: ID does not exist" containerID="998079bc1dae2ffd287ec870a0ec7806adcb3dc1de2ab4d007a45a5726b9507c" Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.516942 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"998079bc1dae2ffd287ec870a0ec7806adcb3dc1de2ab4d007a45a5726b9507c"} err="failed to get container status \"998079bc1dae2ffd287ec870a0ec7806adcb3dc1de2ab4d007a45a5726b9507c\": rpc error: code = NotFound desc = could not find container \"998079bc1dae2ffd287ec870a0ec7806adcb3dc1de2ab4d007a45a5726b9507c\": container with ID starting with 998079bc1dae2ffd287ec870a0ec7806adcb3dc1de2ab4d007a45a5726b9507c not found: ID does not exist" Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.516957 4965 scope.go:117] "RemoveContainer" containerID="81e5fa66e56d9bf3f6f0df9c4a319d9d99a2e1e00b5bc3c7e7d3286b1d88053e" Feb 19 10:30:21 crc kubenswrapper[4965]: E0219 10:30:21.517474 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81e5fa66e56d9bf3f6f0df9c4a319d9d99a2e1e00b5bc3c7e7d3286b1d88053e\": container with ID starting with 81e5fa66e56d9bf3f6f0df9c4a319d9d99a2e1e00b5bc3c7e7d3286b1d88053e not found: ID does not exist" containerID="81e5fa66e56d9bf3f6f0df9c4a319d9d99a2e1e00b5bc3c7e7d3286b1d88053e" Feb 19 10:30:21 crc kubenswrapper[4965]: I0219 10:30:21.517499 4965 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81e5fa66e56d9bf3f6f0df9c4a319d9d99a2e1e00b5bc3c7e7d3286b1d88053e"} err="failed to get container status \"81e5fa66e56d9bf3f6f0df9c4a319d9d99a2e1e00b5bc3c7e7d3286b1d88053e\": rpc error: code = NotFound desc = could not find container \"81e5fa66e56d9bf3f6f0df9c4a319d9d99a2e1e00b5bc3c7e7d3286b1d88053e\": container with ID starting with 81e5fa66e56d9bf3f6f0df9c4a319d9d99a2e1e00b5bc3c7e7d3286b1d88053e not found: ID does not exist" Feb 19 10:30:21 crc kubenswrapper[4965]: E0219 10:30:21.530779 4965 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb737c71_ed34_4949_9b9d_4a18d2660300.slice/crio-d5e19462e603fc669c35861c7d9cb60a1e7c2730e109cb616d8a31751e7e3815\": RecentStats: unable to find data in memory cache]" Feb 19 10:30:23 crc kubenswrapper[4965]: I0219 10:30:23.211013 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb737c71-ed34-4949-9b9d-4a18d2660300" path="/var/lib/kubelet/pods/eb737c71-ed34-4949-9b9d-4a18d2660300/volumes" Feb 19 10:30:43 crc kubenswrapper[4965]: I0219 10:30:43.060409 4965 scope.go:117] "RemoveContainer" containerID="c29553c314bda6ac7c978a596a9a6573778e5f62e54d551ff98c289ee574ae21" Feb 19 10:30:46 crc kubenswrapper[4965]: I0219 10:30:46.600877 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:30:46 crc kubenswrapper[4965]: I0219 10:30:46.601420 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:31:16 crc kubenswrapper[4965]: I0219 10:31:16.601565 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:31:16 crc kubenswrapper[4965]: I0219 10:31:16.602071 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:31:16 crc kubenswrapper[4965]: I0219 10:31:16.602113 4965 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" Feb 19 10:31:16 crc kubenswrapper[4965]: I0219 10:31:16.602883 4965 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b"} pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:31:16 crc kubenswrapper[4965]: I0219 10:31:16.602932 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" containerID="cri-o://b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b" gracePeriod=600 Feb 19 10:31:16 crc kubenswrapper[4965]: E0219 10:31:16.723685 4965 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:31:16 crc kubenswrapper[4965]: I0219 10:31:16.910686 4965 generic.go:334] "Generic (PLEG): container finished" podID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b" exitCode=0 Feb 19 10:31:16 crc kubenswrapper[4965]: I0219 10:31:16.910731 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerDied","Data":"b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b"} Feb 19 10:31:16 crc kubenswrapper[4965]: I0219 10:31:16.910764 4965 scope.go:117] "RemoveContainer" containerID="d46dcae3f2586439cac7ca3262f83dacf19b8435d7db00d5455101ba06915208" Feb 19 10:31:16 crc kubenswrapper[4965]: I0219 10:31:16.911423 4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b" Feb 19 10:31:16 crc kubenswrapper[4965]: E0219 10:31:16.911796 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:31:28 crc kubenswrapper[4965]: I0219 10:31:28.198687 4965 scope.go:117] "RemoveContainer" 
containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b" Feb 19 10:31:28 crc kubenswrapper[4965]: E0219 10:31:28.200218 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:31:39 crc kubenswrapper[4965]: I0219 10:31:39.198499 4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b" Feb 19 10:31:39 crc kubenswrapper[4965]: E0219 10:31:39.199283 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:31:50 crc kubenswrapper[4965]: I0219 10:31:50.200159 4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b" Feb 19 10:31:50 crc kubenswrapper[4965]: E0219 10:31:50.208037 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:32:03 crc kubenswrapper[4965]: I0219 10:32:03.198017 4965 scope.go:117] 
"RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b" Feb 19 10:32:03 crc kubenswrapper[4965]: E0219 10:32:03.200102 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:32:17 crc kubenswrapper[4965]: I0219 10:32:17.202909 4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b" Feb 19 10:32:17 crc kubenswrapper[4965]: E0219 10:32:17.203891 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:32:30 crc kubenswrapper[4965]: I0219 10:32:30.199067 4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b" Feb 19 10:32:30 crc kubenswrapper[4965]: E0219 10:32:30.199945 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:32:43 crc kubenswrapper[4965]: I0219 10:32:43.198350 
4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b" Feb 19 10:32:43 crc kubenswrapper[4965]: E0219 10:32:43.199330 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:32:51 crc kubenswrapper[4965]: I0219 10:32:51.180342 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bqcqs"] Feb 19 10:32:51 crc kubenswrapper[4965]: E0219 10:32:51.181602 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb737c71-ed34-4949-9b9d-4a18d2660300" containerName="extract-utilities" Feb 19 10:32:51 crc kubenswrapper[4965]: I0219 10:32:51.181620 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb737c71-ed34-4949-9b9d-4a18d2660300" containerName="extract-utilities" Feb 19 10:32:51 crc kubenswrapper[4965]: E0219 10:32:51.181639 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb737c71-ed34-4949-9b9d-4a18d2660300" containerName="registry-server" Feb 19 10:32:51 crc kubenswrapper[4965]: I0219 10:32:51.181647 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb737c71-ed34-4949-9b9d-4a18d2660300" containerName="registry-server" Feb 19 10:32:51 crc kubenswrapper[4965]: E0219 10:32:51.181682 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb737c71-ed34-4949-9b9d-4a18d2660300" containerName="extract-content" Feb 19 10:32:51 crc kubenswrapper[4965]: I0219 10:32:51.181691 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb737c71-ed34-4949-9b9d-4a18d2660300" containerName="extract-content" Feb 19 10:32:51 crc 
kubenswrapper[4965]: I0219 10:32:51.182148 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb737c71-ed34-4949-9b9d-4a18d2660300" containerName="registry-server" Feb 19 10:32:51 crc kubenswrapper[4965]: I0219 10:32:51.184014 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bqcqs" Feb 19 10:32:51 crc kubenswrapper[4965]: I0219 10:32:51.216686 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bqcqs"] Feb 19 10:32:51 crc kubenswrapper[4965]: I0219 10:32:51.328276 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12934331-145e-4479-a97c-5412cd747b3d-catalog-content\") pod \"community-operators-bqcqs\" (UID: \"12934331-145e-4479-a97c-5412cd747b3d\") " pod="openshift-marketplace/community-operators-bqcqs" Feb 19 10:32:51 crc kubenswrapper[4965]: I0219 10:32:51.328706 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv5qg\" (UniqueName: \"kubernetes.io/projected/12934331-145e-4479-a97c-5412cd747b3d-kube-api-access-pv5qg\") pod \"community-operators-bqcqs\" (UID: \"12934331-145e-4479-a97c-5412cd747b3d\") " pod="openshift-marketplace/community-operators-bqcqs" Feb 19 10:32:51 crc kubenswrapper[4965]: I0219 10:32:51.328820 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12934331-145e-4479-a97c-5412cd747b3d-utilities\") pod \"community-operators-bqcqs\" (UID: \"12934331-145e-4479-a97c-5412cd747b3d\") " pod="openshift-marketplace/community-operators-bqcqs" Feb 19 10:32:51 crc kubenswrapper[4965]: I0219 10:32:51.430926 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv5qg\" (UniqueName: 
\"kubernetes.io/projected/12934331-145e-4479-a97c-5412cd747b3d-kube-api-access-pv5qg\") pod \"community-operators-bqcqs\" (UID: \"12934331-145e-4479-a97c-5412cd747b3d\") " pod="openshift-marketplace/community-operators-bqcqs" Feb 19 10:32:51 crc kubenswrapper[4965]: I0219 10:32:51.431291 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12934331-145e-4479-a97c-5412cd747b3d-utilities\") pod \"community-operators-bqcqs\" (UID: \"12934331-145e-4479-a97c-5412cd747b3d\") " pod="openshift-marketplace/community-operators-bqcqs" Feb 19 10:32:51 crc kubenswrapper[4965]: I0219 10:32:51.431547 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12934331-145e-4479-a97c-5412cd747b3d-catalog-content\") pod \"community-operators-bqcqs\" (UID: \"12934331-145e-4479-a97c-5412cd747b3d\") " pod="openshift-marketplace/community-operators-bqcqs" Feb 19 10:32:51 crc kubenswrapper[4965]: I0219 10:32:51.431919 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12934331-145e-4479-a97c-5412cd747b3d-utilities\") pod \"community-operators-bqcqs\" (UID: \"12934331-145e-4479-a97c-5412cd747b3d\") " pod="openshift-marketplace/community-operators-bqcqs" Feb 19 10:32:51 crc kubenswrapper[4965]: I0219 10:32:51.432042 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12934331-145e-4479-a97c-5412cd747b3d-catalog-content\") pod \"community-operators-bqcqs\" (UID: \"12934331-145e-4479-a97c-5412cd747b3d\") " pod="openshift-marketplace/community-operators-bqcqs" Feb 19 10:32:51 crc kubenswrapper[4965]: I0219 10:32:51.458076 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv5qg\" (UniqueName: 
\"kubernetes.io/projected/12934331-145e-4479-a97c-5412cd747b3d-kube-api-access-pv5qg\") pod \"community-operators-bqcqs\" (UID: \"12934331-145e-4479-a97c-5412cd747b3d\") " pod="openshift-marketplace/community-operators-bqcqs" Feb 19 10:32:51 crc kubenswrapper[4965]: I0219 10:32:51.514766 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bqcqs" Feb 19 10:32:52 crc kubenswrapper[4965]: I0219 10:32:52.095974 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bqcqs"] Feb 19 10:32:52 crc kubenswrapper[4965]: I0219 10:32:52.899722 4965 generic.go:334] "Generic (PLEG): container finished" podID="12934331-145e-4479-a97c-5412cd747b3d" containerID="b498d288372fab57067c36ee619d866f0c995317437eaf567b9741346a33d545" exitCode=0 Feb 19 10:32:52 crc kubenswrapper[4965]: I0219 10:32:52.900028 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqcqs" event={"ID":"12934331-145e-4479-a97c-5412cd747b3d","Type":"ContainerDied","Data":"b498d288372fab57067c36ee619d866f0c995317437eaf567b9741346a33d545"} Feb 19 10:32:52 crc kubenswrapper[4965]: I0219 10:32:52.900058 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqcqs" event={"ID":"12934331-145e-4479-a97c-5412cd747b3d","Type":"ContainerStarted","Data":"703add77763e6e4413a5e026c929b00b44e63cbb21b4535c5b5a74fdc4f6d674"} Feb 19 10:32:53 crc kubenswrapper[4965]: I0219 10:32:53.922175 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqcqs" event={"ID":"12934331-145e-4479-a97c-5412cd747b3d","Type":"ContainerStarted","Data":"668eaad3520da6635110af273c84ab47762fcd160337aa8d9594cb7fabc95dc7"} Feb 19 10:32:54 crc kubenswrapper[4965]: I0219 10:32:54.198404 4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b" Feb 
19 10:32:54 crc kubenswrapper[4965]: E0219 10:32:54.198614 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:32:55 crc kubenswrapper[4965]: I0219 10:32:55.953842 4965 generic.go:334] "Generic (PLEG): container finished" podID="12934331-145e-4479-a97c-5412cd747b3d" containerID="668eaad3520da6635110af273c84ab47762fcd160337aa8d9594cb7fabc95dc7" exitCode=0 Feb 19 10:32:55 crc kubenswrapper[4965]: I0219 10:32:55.953945 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqcqs" event={"ID":"12934331-145e-4479-a97c-5412cd747b3d","Type":"ContainerDied","Data":"668eaad3520da6635110af273c84ab47762fcd160337aa8d9594cb7fabc95dc7"} Feb 19 10:32:56 crc kubenswrapper[4965]: I0219 10:32:56.967744 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqcqs" event={"ID":"12934331-145e-4479-a97c-5412cd747b3d","Type":"ContainerStarted","Data":"4a10101a521d58990efb687ead540cdcd8acb0c42cc6611f73c1266ab0fac67d"} Feb 19 10:32:56 crc kubenswrapper[4965]: I0219 10:32:56.995931 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bqcqs" podStartSLOduration=2.556363282 podStartE2EDuration="5.995912287s" podCreationTimestamp="2026-02-19 10:32:51 +0000 UTC" firstStartedPulling="2026-02-19 10:32:52.905835275 +0000 UTC m=+3028.527156585" lastFinishedPulling="2026-02-19 10:32:56.34538428 +0000 UTC m=+3031.966705590" observedRunningTime="2026-02-19 10:32:56.985177127 +0000 UTC m=+3032.606498447" watchObservedRunningTime="2026-02-19 10:32:56.995912287 
+0000 UTC m=+3032.617233597" Feb 19 10:33:01 crc kubenswrapper[4965]: I0219 10:33:01.515430 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bqcqs" Feb 19 10:33:01 crc kubenswrapper[4965]: I0219 10:33:01.516983 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bqcqs" Feb 19 10:33:01 crc kubenswrapper[4965]: I0219 10:33:01.591500 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bqcqs" Feb 19 10:33:02 crc kubenswrapper[4965]: I0219 10:33:02.084666 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bqcqs" Feb 19 10:33:02 crc kubenswrapper[4965]: I0219 10:33:02.775923 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bqcqs"] Feb 19 10:33:04 crc kubenswrapper[4965]: I0219 10:33:04.042392 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bqcqs" podUID="12934331-145e-4479-a97c-5412cd747b3d" containerName="registry-server" containerID="cri-o://4a10101a521d58990efb687ead540cdcd8acb0c42cc6611f73c1266ab0fac67d" gracePeriod=2 Feb 19 10:33:05 crc kubenswrapper[4965]: I0219 10:33:05.053827 4965 generic.go:334] "Generic (PLEG): container finished" podID="12934331-145e-4479-a97c-5412cd747b3d" containerID="4a10101a521d58990efb687ead540cdcd8acb0c42cc6611f73c1266ab0fac67d" exitCode=0 Feb 19 10:33:05 crc kubenswrapper[4965]: I0219 10:33:05.053882 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqcqs" event={"ID":"12934331-145e-4479-a97c-5412cd747b3d","Type":"ContainerDied","Data":"4a10101a521d58990efb687ead540cdcd8acb0c42cc6611f73c1266ab0fac67d"} Feb 19 10:33:05 crc kubenswrapper[4965]: I0219 10:33:05.465357 4965 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bqcqs" Feb 19 10:33:05 crc kubenswrapper[4965]: I0219 10:33:05.657219 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv5qg\" (UniqueName: \"kubernetes.io/projected/12934331-145e-4479-a97c-5412cd747b3d-kube-api-access-pv5qg\") pod \"12934331-145e-4479-a97c-5412cd747b3d\" (UID: \"12934331-145e-4479-a97c-5412cd747b3d\") " Feb 19 10:33:05 crc kubenswrapper[4965]: I0219 10:33:05.657534 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12934331-145e-4479-a97c-5412cd747b3d-catalog-content\") pod \"12934331-145e-4479-a97c-5412cd747b3d\" (UID: \"12934331-145e-4479-a97c-5412cd747b3d\") " Feb 19 10:33:05 crc kubenswrapper[4965]: I0219 10:33:05.657688 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12934331-145e-4479-a97c-5412cd747b3d-utilities\") pod \"12934331-145e-4479-a97c-5412cd747b3d\" (UID: \"12934331-145e-4479-a97c-5412cd747b3d\") " Feb 19 10:33:05 crc kubenswrapper[4965]: I0219 10:33:05.658462 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12934331-145e-4479-a97c-5412cd747b3d-utilities" (OuterVolumeSpecName: "utilities") pod "12934331-145e-4479-a97c-5412cd747b3d" (UID: "12934331-145e-4479-a97c-5412cd747b3d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:33:05 crc kubenswrapper[4965]: I0219 10:33:05.667814 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12934331-145e-4479-a97c-5412cd747b3d-kube-api-access-pv5qg" (OuterVolumeSpecName: "kube-api-access-pv5qg") pod "12934331-145e-4479-a97c-5412cd747b3d" (UID: "12934331-145e-4479-a97c-5412cd747b3d"). 
InnerVolumeSpecName "kube-api-access-pv5qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:33:05 crc kubenswrapper[4965]: I0219 10:33:05.714253 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12934331-145e-4479-a97c-5412cd747b3d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12934331-145e-4479-a97c-5412cd747b3d" (UID: "12934331-145e-4479-a97c-5412cd747b3d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:33:05 crc kubenswrapper[4965]: I0219 10:33:05.760730 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv5qg\" (UniqueName: \"kubernetes.io/projected/12934331-145e-4479-a97c-5412cd747b3d-kube-api-access-pv5qg\") on node \"crc\" DevicePath \"\"" Feb 19 10:33:05 crc kubenswrapper[4965]: I0219 10:33:05.760966 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12934331-145e-4479-a97c-5412cd747b3d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:33:05 crc kubenswrapper[4965]: I0219 10:33:05.761044 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12934331-145e-4479-a97c-5412cd747b3d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:33:06 crc kubenswrapper[4965]: I0219 10:33:06.070300 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqcqs" event={"ID":"12934331-145e-4479-a97c-5412cd747b3d","Type":"ContainerDied","Data":"703add77763e6e4413a5e026c929b00b44e63cbb21b4535c5b5a74fdc4f6d674"} Feb 19 10:33:06 crc kubenswrapper[4965]: I0219 10:33:06.070380 4965 scope.go:117] "RemoveContainer" containerID="4a10101a521d58990efb687ead540cdcd8acb0c42cc6611f73c1266ab0fac67d" Feb 19 10:33:06 crc kubenswrapper[4965]: I0219 10:33:06.071891 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bqcqs" Feb 19 10:33:06 crc kubenswrapper[4965]: I0219 10:33:06.105265 4965 scope.go:117] "RemoveContainer" containerID="668eaad3520da6635110af273c84ab47762fcd160337aa8d9594cb7fabc95dc7" Feb 19 10:33:06 crc kubenswrapper[4965]: I0219 10:33:06.125138 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bqcqs"] Feb 19 10:33:06 crc kubenswrapper[4965]: I0219 10:33:06.150354 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bqcqs"] Feb 19 10:33:06 crc kubenswrapper[4965]: I0219 10:33:06.158352 4965 scope.go:117] "RemoveContainer" containerID="b498d288372fab57067c36ee619d866f0c995317437eaf567b9741346a33d545" Feb 19 10:33:07 crc kubenswrapper[4965]: I0219 10:33:07.213375 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12934331-145e-4479-a97c-5412cd747b3d" path="/var/lib/kubelet/pods/12934331-145e-4479-a97c-5412cd747b3d/volumes" Feb 19 10:33:09 crc kubenswrapper[4965]: I0219 10:33:09.202440 4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b" Feb 19 10:33:09 crc kubenswrapper[4965]: E0219 10:33:09.202890 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:33:24 crc kubenswrapper[4965]: I0219 10:33:24.198580 4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b" Feb 19 10:33:24 crc kubenswrapper[4965]: E0219 10:33:24.199487 4965 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:33:38 crc kubenswrapper[4965]: I0219 10:33:38.197746 4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b" Feb 19 10:33:38 crc kubenswrapper[4965]: E0219 10:33:38.198651 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:33:49 crc kubenswrapper[4965]: I0219 10:33:49.197745 4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b" Feb 19 10:33:49 crc kubenswrapper[4965]: E0219 10:33:49.198551 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:34:01 crc kubenswrapper[4965]: I0219 10:34:01.200611 4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b" Feb 19 10:34:01 crc kubenswrapper[4965]: E0219 10:34:01.207165 4965 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:34:13 crc kubenswrapper[4965]: I0219 10:34:13.198725 4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b" Feb 19 10:34:13 crc kubenswrapper[4965]: E0219 10:34:13.200396 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:34:25 crc kubenswrapper[4965]: I0219 10:34:25.223541 4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b" Feb 19 10:34:25 crc kubenswrapper[4965]: E0219 10:34:25.225424 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:34:38 crc kubenswrapper[4965]: I0219 10:34:38.197483 4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b" Feb 19 10:34:38 crc kubenswrapper[4965]: E0219 10:34:38.198334 4965 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83"
Feb 19 10:34:49 crc kubenswrapper[4965]: I0219 10:34:49.198304 4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b"
Feb 19 10:34:49 crc kubenswrapper[4965]: E0219 10:34:49.199012 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83"
Feb 19 10:35:03 crc kubenswrapper[4965]: I0219 10:35:03.197952 4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b"
Feb 19 10:35:03 crc kubenswrapper[4965]: E0219 10:35:03.198813 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83"
Feb 19 10:35:11 crc kubenswrapper[4965]: I0219 10:35:11.299825 4965 generic.go:334] "Generic (PLEG): container finished" podID="acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a" containerID="8b2affc2f3e6e50f0631edaa1a69fd1eab1d6a7c4850d0ae0fd554c0615ce2f7" exitCode=0
Feb 19 10:35:11 crc kubenswrapper[4965]: I0219 10:35:11.300417 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a","Type":"ContainerDied","Data":"8b2affc2f3e6e50f0631edaa1a69fd1eab1d6a7c4850d0ae0fd554c0615ce2f7"}
Feb 19 10:35:12 crc kubenswrapper[4965]: I0219 10:35:12.900469 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 19 10:35:12 crc kubenswrapper[4965]: I0219 10:35:12.996496 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-openstack-config\") pod \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") "
Feb 19 10:35:12 crc kubenswrapper[4965]: I0219 10:35:12.996614 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-test-operator-ephemeral-temporary\") pod \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") "
Feb 19 10:35:12 crc kubenswrapper[4965]: I0219 10:35:12.996678 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") "
Feb 19 10:35:12 crc kubenswrapper[4965]: I0219 10:35:12.996727 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xgvc\" (UniqueName: \"kubernetes.io/projected/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-kube-api-access-9xgvc\") pod \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") "
Feb 19 10:35:12 crc kubenswrapper[4965]: I0219 10:35:12.996787 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-test-operator-ephemeral-workdir\") pod \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") "
Feb 19 10:35:12 crc kubenswrapper[4965]: I0219 10:35:12.996808 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-config-data\") pod \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") "
Feb 19 10:35:12 crc kubenswrapper[4965]: I0219 10:35:12.996830 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-ssh-key\") pod \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") "
Feb 19 10:35:12 crc kubenswrapper[4965]: I0219 10:35:12.996848 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-openstack-config-secret\") pod \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") "
Feb 19 10:35:12 crc kubenswrapper[4965]: I0219 10:35:12.996872 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-ca-certs\") pod \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\" (UID: \"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a\") "
Feb 19 10:35:12 crc kubenswrapper[4965]: I0219 10:35:12.997436 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a" (UID: "acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:35:12 crc kubenswrapper[4965]: I0219 10:35:12.997513 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-config-data" (OuterVolumeSpecName: "config-data") pod "acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a" (UID: "acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:35:13 crc kubenswrapper[4965]: I0219 10:35:13.001514 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a" (UID: "acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 19 10:35:13 crc kubenswrapper[4965]: I0219 10:35:13.002652 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-kube-api-access-9xgvc" (OuterVolumeSpecName: "kube-api-access-9xgvc") pod "acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a" (UID: "acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a"). InnerVolumeSpecName "kube-api-access-9xgvc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:35:13 crc kubenswrapper[4965]: I0219 10:35:13.026065 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a" (UID: "acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:35:13 crc kubenswrapper[4965]: I0219 10:35:13.029039 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a" (UID: "acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:35:13 crc kubenswrapper[4965]: I0219 10:35:13.033367 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a" (UID: "acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:35:13 crc kubenswrapper[4965]: I0219 10:35:13.071046 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a" (UID: "acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:35:13 crc kubenswrapper[4965]: I0219 10:35:13.099398 4965 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Feb 19 10:35:13 crc kubenswrapper[4965]: I0219 10:35:13.099435 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xgvc\" (UniqueName: \"kubernetes.io/projected/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-kube-api-access-9xgvc\") on node \"crc\" DevicePath \"\""
Feb 19 10:35:13 crc kubenswrapper[4965]: I0219 10:35:13.099446 4965 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:35:13 crc kubenswrapper[4965]: I0219 10:35:13.099455 4965 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-ssh-key\") on node \"crc\" DevicePath \"\""
Feb 19 10:35:13 crc kubenswrapper[4965]: I0219 10:35:13.099463 4965 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Feb 19 10:35:13 crc kubenswrapper[4965]: I0219 10:35:13.099472 4965 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-ca-certs\") on node \"crc\" DevicePath \"\""
Feb 19 10:35:13 crc kubenswrapper[4965]: I0219 10:35:13.099482 4965 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-openstack-config\") on node \"crc\" DevicePath \"\""
Feb 19 10:35:13 crc kubenswrapper[4965]: I0219 10:35:13.099491 4965 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Feb 19 10:35:13 crc kubenswrapper[4965]: I0219 10:35:13.127823 4965 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Feb 19 10:35:13 crc kubenswrapper[4965]: I0219 10:35:13.200778 4965 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Feb 19 10:35:13 crc kubenswrapper[4965]: I0219 10:35:13.330919 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a","Type":"ContainerDied","Data":"a832fb108f47dade4554a1edebd06ec73d36c02cfd8817239c66fb5387bf0c36"}
Feb 19 10:35:13 crc kubenswrapper[4965]: I0219 10:35:13.330958 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a832fb108f47dade4554a1edebd06ec73d36c02cfd8817239c66fb5387bf0c36"
Feb 19 10:35:13 crc kubenswrapper[4965]: I0219 10:35:13.331014 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 19 10:35:13 crc kubenswrapper[4965]: I0219 10:35:13.437829 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a" (UID: "acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:35:13 crc kubenswrapper[4965]: I0219 10:35:13.509017 4965 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Feb 19 10:35:16 crc kubenswrapper[4965]: I0219 10:35:16.199892 4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b"
Feb 19 10:35:16 crc kubenswrapper[4965]: E0219 10:35:16.200856 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83"
Feb 19 10:35:24 crc kubenswrapper[4965]: I0219 10:35:24.459255 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 19 10:35:24 crc kubenswrapper[4965]: E0219 10:35:24.460285 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12934331-145e-4479-a97c-5412cd747b3d" containerName="extract-utilities"
Feb 19 10:35:24 crc kubenswrapper[4965]: I0219 10:35:24.460303 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="12934331-145e-4479-a97c-5412cd747b3d" containerName="extract-utilities"
Feb 19 10:35:24 crc kubenswrapper[4965]: E0219 10:35:24.460322 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12934331-145e-4479-a97c-5412cd747b3d" containerName="extract-content"
Feb 19 10:35:24 crc kubenswrapper[4965]: I0219 10:35:24.460329 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="12934331-145e-4479-a97c-5412cd747b3d" containerName="extract-content"
Feb 19 10:35:24 crc kubenswrapper[4965]: E0219 10:35:24.460354 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12934331-145e-4479-a97c-5412cd747b3d" containerName="registry-server"
Feb 19 10:35:24 crc kubenswrapper[4965]: I0219 10:35:24.460364 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="12934331-145e-4479-a97c-5412cd747b3d" containerName="registry-server"
Feb 19 10:35:24 crc kubenswrapper[4965]: E0219 10:35:24.460380 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a" containerName="tempest-tests-tempest-tests-runner"
Feb 19 10:35:24 crc kubenswrapper[4965]: I0219 10:35:24.460388 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a" containerName="tempest-tests-tempest-tests-runner"
Feb 19 10:35:24 crc kubenswrapper[4965]: I0219 10:35:24.460645 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="12934331-145e-4479-a97c-5412cd747b3d" containerName="registry-server"
Feb 19 10:35:24 crc kubenswrapper[4965]: I0219 10:35:24.460667 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a" containerName="tempest-tests-tempest-tests-runner"
Feb 19 10:35:24 crc kubenswrapper[4965]: I0219 10:35:24.461579 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 19 10:35:24 crc kubenswrapper[4965]: I0219 10:35:24.466095 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-xx4mq"
Feb 19 10:35:24 crc kubenswrapper[4965]: I0219 10:35:24.477657 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 19 10:35:24 crc kubenswrapper[4965]: I0219 10:35:24.575904 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wwth\" (UniqueName: \"kubernetes.io/projected/ca9aec62-8a03-4f2d-acf7-cb4c5a08be00-kube-api-access-9wwth\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ca9aec62-8a03-4f2d-acf7-cb4c5a08be00\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 19 10:35:24 crc kubenswrapper[4965]: I0219 10:35:24.576395 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ca9aec62-8a03-4f2d-acf7-cb4c5a08be00\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 19 10:35:24 crc kubenswrapper[4965]: I0219 10:35:24.677871 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wwth\" (UniqueName: \"kubernetes.io/projected/ca9aec62-8a03-4f2d-acf7-cb4c5a08be00-kube-api-access-9wwth\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ca9aec62-8a03-4f2d-acf7-cb4c5a08be00\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 19 10:35:24 crc kubenswrapper[4965]: I0219 10:35:24.678049 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ca9aec62-8a03-4f2d-acf7-cb4c5a08be00\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 19 10:35:24 crc kubenswrapper[4965]: I0219 10:35:24.678445 4965 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ca9aec62-8a03-4f2d-acf7-cb4c5a08be00\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 19 10:35:24 crc kubenswrapper[4965]: I0219 10:35:24.699387 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wwth\" (UniqueName: \"kubernetes.io/projected/ca9aec62-8a03-4f2d-acf7-cb4c5a08be00-kube-api-access-9wwth\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ca9aec62-8a03-4f2d-acf7-cb4c5a08be00\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 19 10:35:24 crc kubenswrapper[4965]: I0219 10:35:24.709165 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"ca9aec62-8a03-4f2d-acf7-cb4c5a08be00\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 19 10:35:24 crc kubenswrapper[4965]: I0219 10:35:24.781371 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 19 10:35:25 crc kubenswrapper[4965]: I0219 10:35:25.282230 4965 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 10:35:25 crc kubenswrapper[4965]: I0219 10:35:25.284556 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 19 10:35:25 crc kubenswrapper[4965]: I0219 10:35:25.432747 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"ca9aec62-8a03-4f2d-acf7-cb4c5a08be00","Type":"ContainerStarted","Data":"f7b7f0abcb34d21d2748b79db00e586b093127e68932c25b737c2dac51a14c9b"}
Feb 19 10:35:26 crc kubenswrapper[4965]: I0219 10:35:26.444947 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"ca9aec62-8a03-4f2d-acf7-cb4c5a08be00","Type":"ContainerStarted","Data":"4928f340ec169ef025fe0a72843463084f4578cfef0e8c6d397523eff0caa0f6"}
Feb 19 10:35:26 crc kubenswrapper[4965]: I0219 10:35:26.463844 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.674357198 podStartE2EDuration="2.463826822s" podCreationTimestamp="2026-02-19 10:35:24 +0000 UTC" firstStartedPulling="2026-02-19 10:35:25.281955817 +0000 UTC m=+3180.903277127" lastFinishedPulling="2026-02-19 10:35:26.071425431 +0000 UTC m=+3181.692746751" observedRunningTime="2026-02-19 10:35:26.458934583 +0000 UTC m=+3182.080255893" watchObservedRunningTime="2026-02-19 10:35:26.463826822 +0000 UTC m=+3182.085148122"
Feb 19 10:35:29 crc kubenswrapper[4965]: I0219 10:35:29.200633 4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b"
Feb 19 10:35:29 crc kubenswrapper[4965]: E0219 10:35:29.201669 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83"
Feb 19 10:35:43 crc kubenswrapper[4965]: I0219 10:35:43.198643 4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b"
Feb 19 10:35:43 crc kubenswrapper[4965]: E0219 10:35:43.199855 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83"
Feb 19 10:35:51 crc kubenswrapper[4965]: I0219 10:35:51.664331 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7rlbz"]
Feb 19 10:35:51 crc kubenswrapper[4965]: I0219 10:35:51.667250 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7rlbz"
Feb 19 10:35:51 crc kubenswrapper[4965]: I0219 10:35:51.676473 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7rlbz"]
Feb 19 10:35:51 crc kubenswrapper[4965]: I0219 10:35:51.834247 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nwhr\" (UniqueName: \"kubernetes.io/projected/90fee0a7-5516-411c-af58-bd02a92f955c-kube-api-access-4nwhr\") pod \"redhat-operators-7rlbz\" (UID: \"90fee0a7-5516-411c-af58-bd02a92f955c\") " pod="openshift-marketplace/redhat-operators-7rlbz"
Feb 19 10:35:51 crc kubenswrapper[4965]: I0219 10:35:51.834329 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90fee0a7-5516-411c-af58-bd02a92f955c-catalog-content\") pod \"redhat-operators-7rlbz\" (UID: \"90fee0a7-5516-411c-af58-bd02a92f955c\") " pod="openshift-marketplace/redhat-operators-7rlbz"
Feb 19 10:35:51 crc kubenswrapper[4965]: I0219 10:35:51.834353 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90fee0a7-5516-411c-af58-bd02a92f955c-utilities\") pod \"redhat-operators-7rlbz\" (UID: \"90fee0a7-5516-411c-af58-bd02a92f955c\") " pod="openshift-marketplace/redhat-operators-7rlbz"
Feb 19 10:35:51 crc kubenswrapper[4965]: I0219 10:35:51.936182 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nwhr\" (UniqueName: \"kubernetes.io/projected/90fee0a7-5516-411c-af58-bd02a92f955c-kube-api-access-4nwhr\") pod \"redhat-operators-7rlbz\" (UID: \"90fee0a7-5516-411c-af58-bd02a92f955c\") " pod="openshift-marketplace/redhat-operators-7rlbz"
Feb 19 10:35:51 crc kubenswrapper[4965]: I0219 10:35:51.936268 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90fee0a7-5516-411c-af58-bd02a92f955c-catalog-content\") pod \"redhat-operators-7rlbz\" (UID: \"90fee0a7-5516-411c-af58-bd02a92f955c\") " pod="openshift-marketplace/redhat-operators-7rlbz"
Feb 19 10:35:51 crc kubenswrapper[4965]: I0219 10:35:51.936286 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90fee0a7-5516-411c-af58-bd02a92f955c-utilities\") pod \"redhat-operators-7rlbz\" (UID: \"90fee0a7-5516-411c-af58-bd02a92f955c\") " pod="openshift-marketplace/redhat-operators-7rlbz"
Feb 19 10:35:51 crc kubenswrapper[4965]: I0219 10:35:51.936748 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90fee0a7-5516-411c-af58-bd02a92f955c-utilities\") pod \"redhat-operators-7rlbz\" (UID: \"90fee0a7-5516-411c-af58-bd02a92f955c\") " pod="openshift-marketplace/redhat-operators-7rlbz"
Feb 19 10:35:51 crc kubenswrapper[4965]: I0219 10:35:51.937260 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90fee0a7-5516-411c-af58-bd02a92f955c-catalog-content\") pod \"redhat-operators-7rlbz\" (UID: \"90fee0a7-5516-411c-af58-bd02a92f955c\") " pod="openshift-marketplace/redhat-operators-7rlbz"
Feb 19 10:35:51 crc kubenswrapper[4965]: I0219 10:35:51.960332 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nwhr\" (UniqueName: \"kubernetes.io/projected/90fee0a7-5516-411c-af58-bd02a92f955c-kube-api-access-4nwhr\") pod \"redhat-operators-7rlbz\" (UID: \"90fee0a7-5516-411c-af58-bd02a92f955c\") " pod="openshift-marketplace/redhat-operators-7rlbz"
Feb 19 10:35:51 crc kubenswrapper[4965]: I0219 10:35:51.995091 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7rlbz"
Feb 19 10:35:52 crc kubenswrapper[4965]: I0219 10:35:52.483633 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7rlbz"]
Feb 19 10:35:52 crc kubenswrapper[4965]: I0219 10:35:52.715090 4965 generic.go:334] "Generic (PLEG): container finished" podID="90fee0a7-5516-411c-af58-bd02a92f955c" containerID="683ba4d54ae13edce472861e9d9d950adcd3377bad3af16e3860efcd0d453bab" exitCode=0
Feb 19 10:35:52 crc kubenswrapper[4965]: I0219 10:35:52.715133 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rlbz" event={"ID":"90fee0a7-5516-411c-af58-bd02a92f955c","Type":"ContainerDied","Data":"683ba4d54ae13edce472861e9d9d950adcd3377bad3af16e3860efcd0d453bab"}
Feb 19 10:35:52 crc kubenswrapper[4965]: I0219 10:35:52.715158 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rlbz" event={"ID":"90fee0a7-5516-411c-af58-bd02a92f955c","Type":"ContainerStarted","Data":"81d2fb7a3db227b5de3bb2c6a3e4eb5fef4ae568115609bb3b83d5d2f3e5996c"}
Feb 19 10:35:53 crc kubenswrapper[4965]: I0219 10:35:53.739485 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vdz8f/must-gather-npf9w"]
Feb 19 10:35:53 crc kubenswrapper[4965]: I0219 10:35:53.742580 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vdz8f/must-gather-npf9w"
Feb 19 10:35:53 crc kubenswrapper[4965]: I0219 10:35:53.748676 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vdz8f"/"openshift-service-ca.crt"
Feb 19 10:35:53 crc kubenswrapper[4965]: I0219 10:35:53.748905 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vdz8f"/"kube-root-ca.crt"
Feb 19 10:35:53 crc kubenswrapper[4965]: I0219 10:35:53.785407 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vdz8f/must-gather-npf9w"]
Feb 19 10:35:53 crc kubenswrapper[4965]: I0219 10:35:53.882603 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrpmt\" (UniqueName: \"kubernetes.io/projected/303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d-kube-api-access-wrpmt\") pod \"must-gather-npf9w\" (UID: \"303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d\") " pod="openshift-must-gather-vdz8f/must-gather-npf9w"
Feb 19 10:35:53 crc kubenswrapper[4965]: I0219 10:35:53.882886 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d-must-gather-output\") pod \"must-gather-npf9w\" (UID: \"303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d\") " pod="openshift-must-gather-vdz8f/must-gather-npf9w"
Feb 19 10:35:53 crc kubenswrapper[4965]: I0219 10:35:53.984824 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrpmt\" (UniqueName: \"kubernetes.io/projected/303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d-kube-api-access-wrpmt\") pod \"must-gather-npf9w\" (UID: \"303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d\") " pod="openshift-must-gather-vdz8f/must-gather-npf9w"
Feb 19 10:35:53 crc kubenswrapper[4965]: I0219 10:35:53.984950 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d-must-gather-output\") pod \"must-gather-npf9w\" (UID: \"303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d\") " pod="openshift-must-gather-vdz8f/must-gather-npf9w"
Feb 19 10:35:53 crc kubenswrapper[4965]: I0219 10:35:53.985446 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d-must-gather-output\") pod \"must-gather-npf9w\" (UID: \"303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d\") " pod="openshift-must-gather-vdz8f/must-gather-npf9w"
Feb 19 10:35:54 crc kubenswrapper[4965]: I0219 10:35:54.001267 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrpmt\" (UniqueName: \"kubernetes.io/projected/303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d-kube-api-access-wrpmt\") pod \"must-gather-npf9w\" (UID: \"303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d\") " pod="openshift-must-gather-vdz8f/must-gather-npf9w"
Feb 19 10:35:54 crc kubenswrapper[4965]: I0219 10:35:54.088028 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vdz8f/must-gather-npf9w"
Feb 19 10:35:54 crc kubenswrapper[4965]: I0219 10:35:54.198593 4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b"
Feb 19 10:35:54 crc kubenswrapper[4965]: E0219 10:35:54.198960 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83"
Feb 19 10:35:54 crc kubenswrapper[4965]: I0219 10:35:54.699489 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vdz8f/must-gather-npf9w"]
Feb 19 10:35:54 crc kubenswrapper[4965]: I0219 10:35:54.785924 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vdz8f/must-gather-npf9w" event={"ID":"303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d","Type":"ContainerStarted","Data":"fee5890c4d49ce8b767917a6189962d8293541583a8e22c1735067914af10ed9"}
Feb 19 10:35:55 crc kubenswrapper[4965]: I0219 10:35:55.803020 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rlbz" event={"ID":"90fee0a7-5516-411c-af58-bd02a92f955c","Type":"ContainerStarted","Data":"97caa4c852f0cbc9674d542a9be0c388338f2b4fc5cf288afbf9ff0adadf17cd"}
Feb 19 10:36:03 crc kubenswrapper[4965]: I0219 10:36:03.878049 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vdz8f/must-gather-npf9w" event={"ID":"303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d","Type":"ContainerStarted","Data":"6e341d49bd437c3712b2fbfda8681b8f1b12afc0ef915825b93fe7afcdccab8b"}
Feb 19 10:36:03 crc kubenswrapper[4965]: I0219 10:36:03.878664 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vdz8f/must-gather-npf9w" event={"ID":"303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d","Type":"ContainerStarted","Data":"6bd54c4d743d82b3bf0e6267332ef9ce559585651704c2eed5ff4964d795128c"}
Feb 19 10:36:03 crc kubenswrapper[4965]: I0219 10:36:03.879961 4965 generic.go:334] "Generic (PLEG): container finished" podID="90fee0a7-5516-411c-af58-bd02a92f955c" containerID="97caa4c852f0cbc9674d542a9be0c388338f2b4fc5cf288afbf9ff0adadf17cd" exitCode=0
Feb 19 10:36:03 crc kubenswrapper[4965]: I0219 10:36:03.880002 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rlbz" event={"ID":"90fee0a7-5516-411c-af58-bd02a92f955c","Type":"ContainerDied","Data":"97caa4c852f0cbc9674d542a9be0c388338f2b4fc5cf288afbf9ff0adadf17cd"}
Feb 19 10:36:03 crc kubenswrapper[4965]: I0219 10:36:03.899674 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vdz8f/must-gather-npf9w" podStartSLOduration=2.562350456 podStartE2EDuration="10.899653298s" podCreationTimestamp="2026-02-19 10:35:53 +0000 UTC" firstStartedPulling="2026-02-19 10:35:54.713495142 +0000 UTC m=+3210.334816452" lastFinishedPulling="2026-02-19 10:36:03.050797984 +0000 UTC m=+3218.672119294" observedRunningTime="2026-02-19 10:36:03.895667321 +0000 UTC m=+3219.516988631" watchObservedRunningTime="2026-02-19 10:36:03.899653298 +0000 UTC m=+3219.520974608"
Feb 19 10:36:04 crc kubenswrapper[4965]: I0219 10:36:04.896971 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rlbz" event={"ID":"90fee0a7-5516-411c-af58-bd02a92f955c","Type":"ContainerStarted","Data":"f2783b57bd11c70a70978b8751abd469c688f44eccc039d0088c337b5e327df4"}
Feb 19 10:36:07 crc kubenswrapper[4965]: I0219 10:36:07.198611 4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b"
Feb 19 10:36:07 crc kubenswrapper[4965]: E0219 10:36:07.199447 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83"
Feb 19 10:36:08 crc kubenswrapper[4965]: I0219 10:36:08.061141 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7rlbz" podStartSLOduration=5.4995706 podStartE2EDuration="17.061119319s" podCreationTimestamp="2026-02-19 10:35:51 +0000 UTC" firstStartedPulling="2026-02-19 10:35:52.721394999 +0000 UTC m=+3208.342716309" lastFinishedPulling="2026-02-19 10:36:04.282943698 +0000 UTC m=+3219.904265028" observedRunningTime="2026-02-19 10:36:04.930473112 +0000 UTC m=+3220.551794432" watchObservedRunningTime="2026-02-19 10:36:08.061119319 +0000 UTC m=+3223.682440629"
Feb 19 10:36:08 crc kubenswrapper[4965]: I0219 10:36:08.061646 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vdz8f/crc-debug-gh2lg"]
Feb 19 10:36:08 crc kubenswrapper[4965]: I0219 10:36:08.062962 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vdz8f/crc-debug-gh2lg"
Feb 19 10:36:08 crc kubenswrapper[4965]: I0219 10:36:08.065296 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vdz8f"/"default-dockercfg-8cb4v"
Feb 19 10:36:08 crc kubenswrapper[4965]: I0219 10:36:08.145780 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/393a1279-ff19-4a2b-8ca8-47b9764ff756-host\") pod \"crc-debug-gh2lg\" (UID: \"393a1279-ff19-4a2b-8ca8-47b9764ff756\") " pod="openshift-must-gather-vdz8f/crc-debug-gh2lg"
Feb 19 10:36:08 crc kubenswrapper[4965]: I0219 10:36:08.146078 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt7vv\" (UniqueName: \"kubernetes.io/projected/393a1279-ff19-4a2b-8ca8-47b9764ff756-kube-api-access-vt7vv\") pod \"crc-debug-gh2lg\" (UID: \"393a1279-ff19-4a2b-8ca8-47b9764ff756\") " pod="openshift-must-gather-vdz8f/crc-debug-gh2lg"
Feb 19 10:36:08 crc kubenswrapper[4965]: I0219 10:36:08.248481 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/393a1279-ff19-4a2b-8ca8-47b9764ff756-host\") pod \"crc-debug-gh2lg\" (UID: \"393a1279-ff19-4a2b-8ca8-47b9764ff756\") " pod="openshift-must-gather-vdz8f/crc-debug-gh2lg"
Feb 19 10:36:08 crc kubenswrapper[4965]: I0219 10:36:08.248575 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/393a1279-ff19-4a2b-8ca8-47b9764ff756-host\") pod \"crc-debug-gh2lg\" (UID: \"393a1279-ff19-4a2b-8ca8-47b9764ff756\") " pod="openshift-must-gather-vdz8f/crc-debug-gh2lg"
Feb 19 10:36:08 crc kubenswrapper[4965]: I0219 10:36:08.249081 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt7vv\" (UniqueName:
\"kubernetes.io/projected/393a1279-ff19-4a2b-8ca8-47b9764ff756-kube-api-access-vt7vv\") pod \"crc-debug-gh2lg\" (UID: \"393a1279-ff19-4a2b-8ca8-47b9764ff756\") " pod="openshift-must-gather-vdz8f/crc-debug-gh2lg" Feb 19 10:36:08 crc kubenswrapper[4965]: I0219 10:36:08.275794 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt7vv\" (UniqueName: \"kubernetes.io/projected/393a1279-ff19-4a2b-8ca8-47b9764ff756-kube-api-access-vt7vv\") pod \"crc-debug-gh2lg\" (UID: \"393a1279-ff19-4a2b-8ca8-47b9764ff756\") " pod="openshift-must-gather-vdz8f/crc-debug-gh2lg" Feb 19 10:36:08 crc kubenswrapper[4965]: I0219 10:36:08.384961 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vdz8f/crc-debug-gh2lg" Feb 19 10:36:08 crc kubenswrapper[4965]: W0219 10:36:08.419341 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod393a1279_ff19_4a2b_8ca8_47b9764ff756.slice/crio-e6821f8a172d92ee237ccd6ea7b7a70c8e7318e396551d07aa07969682020338 WatchSource:0}: Error finding container e6821f8a172d92ee237ccd6ea7b7a70c8e7318e396551d07aa07969682020338: Status 404 returned error can't find the container with id e6821f8a172d92ee237ccd6ea7b7a70c8e7318e396551d07aa07969682020338 Feb 19 10:36:08 crc kubenswrapper[4965]: I0219 10:36:08.934393 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vdz8f/crc-debug-gh2lg" event={"ID":"393a1279-ff19-4a2b-8ca8-47b9764ff756","Type":"ContainerStarted","Data":"e6821f8a172d92ee237ccd6ea7b7a70c8e7318e396551d07aa07969682020338"} Feb 19 10:36:11 crc kubenswrapper[4965]: I0219 10:36:11.995206 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7rlbz" Feb 19 10:36:11 crc kubenswrapper[4965]: I0219 10:36:11.995505 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-7rlbz" Feb 19 10:36:12 crc kubenswrapper[4965]: I0219 10:36:12.047453 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7rlbz" Feb 19 10:36:13 crc kubenswrapper[4965]: I0219 10:36:13.044537 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7rlbz" Feb 19 10:36:13 crc kubenswrapper[4965]: I0219 10:36:13.102446 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7rlbz"] Feb 19 10:36:15 crc kubenswrapper[4965]: I0219 10:36:15.029254 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7rlbz" podUID="90fee0a7-5516-411c-af58-bd02a92f955c" containerName="registry-server" containerID="cri-o://f2783b57bd11c70a70978b8751abd469c688f44eccc039d0088c337b5e327df4" gracePeriod=2 Feb 19 10:36:16 crc kubenswrapper[4965]: I0219 10:36:16.043109 4965 generic.go:334] "Generic (PLEG): container finished" podID="90fee0a7-5516-411c-af58-bd02a92f955c" containerID="f2783b57bd11c70a70978b8751abd469c688f44eccc039d0088c337b5e327df4" exitCode=0 Feb 19 10:36:16 crc kubenswrapper[4965]: I0219 10:36:16.043156 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rlbz" event={"ID":"90fee0a7-5516-411c-af58-bd02a92f955c","Type":"ContainerDied","Data":"f2783b57bd11c70a70978b8751abd469c688f44eccc039d0088c337b5e327df4"} Feb 19 10:36:18 crc kubenswrapper[4965]: I0219 10:36:18.198185 4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b" Feb 19 10:36:21 crc kubenswrapper[4965]: I0219 10:36:21.097938 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" 
event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerStarted","Data":"80c316cf612b9ca1c6b347e543b0dd6345d3fe3eb0881cb783c0e74417f03cc0"} Feb 19 10:36:21 crc kubenswrapper[4965]: I0219 10:36:21.100567 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vdz8f/crc-debug-gh2lg" event={"ID":"393a1279-ff19-4a2b-8ca8-47b9764ff756","Type":"ContainerStarted","Data":"2f73b97c0382c23220ae95586591fbd1a8ff8050cf9c261530443fbcbb86d11f"} Feb 19 10:36:21 crc kubenswrapper[4965]: I0219 10:36:21.103915 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7rlbz" event={"ID":"90fee0a7-5516-411c-af58-bd02a92f955c","Type":"ContainerDied","Data":"81d2fb7a3db227b5de3bb2c6a3e4eb5fef4ae568115609bb3b83d5d2f3e5996c"} Feb 19 10:36:21 crc kubenswrapper[4965]: I0219 10:36:21.103942 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81d2fb7a3db227b5de3bb2c6a3e4eb5fef4ae568115609bb3b83d5d2f3e5996c" Feb 19 10:36:21 crc kubenswrapper[4965]: I0219 10:36:21.105398 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7rlbz" Feb 19 10:36:21 crc kubenswrapper[4965]: I0219 10:36:21.229926 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vdz8f/crc-debug-gh2lg" podStartSLOduration=1.080021583 podStartE2EDuration="13.229904761s" podCreationTimestamp="2026-02-19 10:36:08 +0000 UTC" firstStartedPulling="2026-02-19 10:36:08.421328139 +0000 UTC m=+3224.042649449" lastFinishedPulling="2026-02-19 10:36:20.571211317 +0000 UTC m=+3236.192532627" observedRunningTime="2026-02-19 10:36:21.145428814 +0000 UTC m=+3236.766750124" watchObservedRunningTime="2026-02-19 10:36:21.229904761 +0000 UTC m=+3236.851226071" Feb 19 10:36:21 crc kubenswrapper[4965]: I0219 10:36:21.237023 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90fee0a7-5516-411c-af58-bd02a92f955c-utilities\") pod \"90fee0a7-5516-411c-af58-bd02a92f955c\" (UID: \"90fee0a7-5516-411c-af58-bd02a92f955c\") " Feb 19 10:36:21 crc kubenswrapper[4965]: I0219 10:36:21.237151 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90fee0a7-5516-411c-af58-bd02a92f955c-catalog-content\") pod \"90fee0a7-5516-411c-af58-bd02a92f955c\" (UID: \"90fee0a7-5516-411c-af58-bd02a92f955c\") " Feb 19 10:36:21 crc kubenswrapper[4965]: I0219 10:36:21.237297 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nwhr\" (UniqueName: \"kubernetes.io/projected/90fee0a7-5516-411c-af58-bd02a92f955c-kube-api-access-4nwhr\") pod \"90fee0a7-5516-411c-af58-bd02a92f955c\" (UID: \"90fee0a7-5516-411c-af58-bd02a92f955c\") " Feb 19 10:36:21 crc kubenswrapper[4965]: I0219 10:36:21.242896 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90fee0a7-5516-411c-af58-bd02a92f955c-utilities" 
(OuterVolumeSpecName: "utilities") pod "90fee0a7-5516-411c-af58-bd02a92f955c" (UID: "90fee0a7-5516-411c-af58-bd02a92f955c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:36:21 crc kubenswrapper[4965]: I0219 10:36:21.253513 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90fee0a7-5516-411c-af58-bd02a92f955c-kube-api-access-4nwhr" (OuterVolumeSpecName: "kube-api-access-4nwhr") pod "90fee0a7-5516-411c-af58-bd02a92f955c" (UID: "90fee0a7-5516-411c-af58-bd02a92f955c"). InnerVolumeSpecName "kube-api-access-4nwhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:36:21 crc kubenswrapper[4965]: I0219 10:36:21.342708 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90fee0a7-5516-411c-af58-bd02a92f955c-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:36:21 crc kubenswrapper[4965]: I0219 10:36:21.342750 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nwhr\" (UniqueName: \"kubernetes.io/projected/90fee0a7-5516-411c-af58-bd02a92f955c-kube-api-access-4nwhr\") on node \"crc\" DevicePath \"\"" Feb 19 10:36:21 crc kubenswrapper[4965]: I0219 10:36:21.354839 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90fee0a7-5516-411c-af58-bd02a92f955c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90fee0a7-5516-411c-af58-bd02a92f955c" (UID: "90fee0a7-5516-411c-af58-bd02a92f955c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:36:21 crc kubenswrapper[4965]: I0219 10:36:21.444867 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90fee0a7-5516-411c-af58-bd02a92f955c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:36:22 crc kubenswrapper[4965]: I0219 10:36:22.112539 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7rlbz" Feb 19 10:36:22 crc kubenswrapper[4965]: I0219 10:36:22.145643 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7rlbz"] Feb 19 10:36:22 crc kubenswrapper[4965]: I0219 10:36:22.156864 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7rlbz"] Feb 19 10:36:23 crc kubenswrapper[4965]: I0219 10:36:23.213651 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90fee0a7-5516-411c-af58-bd02a92f955c" path="/var/lib/kubelet/pods/90fee0a7-5516-411c-af58-bd02a92f955c/volumes" Feb 19 10:36:51 crc kubenswrapper[4965]: I0219 10:36:51.227024 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-hb4c6" podUID="7adcb318-8832-417d-814a-7a2d21c8af30" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.120:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 10:37:19 crc kubenswrapper[4965]: I0219 10:37:19.654236 4965 generic.go:334] "Generic (PLEG): container finished" podID="393a1279-ff19-4a2b-8ca8-47b9764ff756" containerID="2f73b97c0382c23220ae95586591fbd1a8ff8050cf9c261530443fbcbb86d11f" exitCode=0 Feb 19 10:37:19 crc kubenswrapper[4965]: I0219 10:37:19.654316 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vdz8f/crc-debug-gh2lg" 
event={"ID":"393a1279-ff19-4a2b-8ca8-47b9764ff756","Type":"ContainerDied","Data":"2f73b97c0382c23220ae95586591fbd1a8ff8050cf9c261530443fbcbb86d11f"} Feb 19 10:37:20 crc kubenswrapper[4965]: I0219 10:37:20.774582 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vdz8f/crc-debug-gh2lg" Feb 19 10:37:20 crc kubenswrapper[4965]: I0219 10:37:20.819387 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vdz8f/crc-debug-gh2lg"] Feb 19 10:37:20 crc kubenswrapper[4965]: I0219 10:37:20.831256 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vdz8f/crc-debug-gh2lg"] Feb 19 10:37:20 crc kubenswrapper[4965]: I0219 10:37:20.859315 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/393a1279-ff19-4a2b-8ca8-47b9764ff756-host\") pod \"393a1279-ff19-4a2b-8ca8-47b9764ff756\" (UID: \"393a1279-ff19-4a2b-8ca8-47b9764ff756\") " Feb 19 10:37:20 crc kubenswrapper[4965]: I0219 10:37:20.859432 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/393a1279-ff19-4a2b-8ca8-47b9764ff756-host" (OuterVolumeSpecName: "host") pod "393a1279-ff19-4a2b-8ca8-47b9764ff756" (UID: "393a1279-ff19-4a2b-8ca8-47b9764ff756"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:37:20 crc kubenswrapper[4965]: I0219 10:37:20.859594 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt7vv\" (UniqueName: \"kubernetes.io/projected/393a1279-ff19-4a2b-8ca8-47b9764ff756-kube-api-access-vt7vv\") pod \"393a1279-ff19-4a2b-8ca8-47b9764ff756\" (UID: \"393a1279-ff19-4a2b-8ca8-47b9764ff756\") " Feb 19 10:37:20 crc kubenswrapper[4965]: I0219 10:37:20.860007 4965 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/393a1279-ff19-4a2b-8ca8-47b9764ff756-host\") on node \"crc\" DevicePath \"\"" Feb 19 10:37:20 crc kubenswrapper[4965]: I0219 10:37:20.865431 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/393a1279-ff19-4a2b-8ca8-47b9764ff756-kube-api-access-vt7vv" (OuterVolumeSpecName: "kube-api-access-vt7vv") pod "393a1279-ff19-4a2b-8ca8-47b9764ff756" (UID: "393a1279-ff19-4a2b-8ca8-47b9764ff756"). InnerVolumeSpecName "kube-api-access-vt7vv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:37:20 crc kubenswrapper[4965]: I0219 10:37:20.961880 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt7vv\" (UniqueName: \"kubernetes.io/projected/393a1279-ff19-4a2b-8ca8-47b9764ff756-kube-api-access-vt7vv\") on node \"crc\" DevicePath \"\"" Feb 19 10:37:21 crc kubenswrapper[4965]: I0219 10:37:21.214458 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="393a1279-ff19-4a2b-8ca8-47b9764ff756" path="/var/lib/kubelet/pods/393a1279-ff19-4a2b-8ca8-47b9764ff756/volumes" Feb 19 10:37:21 crc kubenswrapper[4965]: I0219 10:37:21.681603 4965 scope.go:117] "RemoveContainer" containerID="2f73b97c0382c23220ae95586591fbd1a8ff8050cf9c261530443fbcbb86d11f" Feb 19 10:37:21 crc kubenswrapper[4965]: I0219 10:37:21.681716 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vdz8f/crc-debug-gh2lg" Feb 19 10:37:21 crc kubenswrapper[4965]: I0219 10:37:21.982389 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vdz8f/crc-debug-x2mlq"] Feb 19 10:37:21 crc kubenswrapper[4965]: E0219 10:37:21.983066 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90fee0a7-5516-411c-af58-bd02a92f955c" containerName="extract-content" Feb 19 10:37:21 crc kubenswrapper[4965]: I0219 10:37:21.983081 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="90fee0a7-5516-411c-af58-bd02a92f955c" containerName="extract-content" Feb 19 10:37:21 crc kubenswrapper[4965]: E0219 10:37:21.983105 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90fee0a7-5516-411c-af58-bd02a92f955c" containerName="registry-server" Feb 19 10:37:21 crc kubenswrapper[4965]: I0219 10:37:21.983112 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="90fee0a7-5516-411c-af58-bd02a92f955c" containerName="registry-server" Feb 19 10:37:21 crc kubenswrapper[4965]: E0219 10:37:21.983128 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="393a1279-ff19-4a2b-8ca8-47b9764ff756" containerName="container-00" Feb 19 10:37:21 crc kubenswrapper[4965]: I0219 10:37:21.983134 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="393a1279-ff19-4a2b-8ca8-47b9764ff756" containerName="container-00" Feb 19 10:37:21 crc kubenswrapper[4965]: E0219 10:37:21.983165 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90fee0a7-5516-411c-af58-bd02a92f955c" containerName="extract-utilities" Feb 19 10:37:21 crc kubenswrapper[4965]: I0219 10:37:21.983170 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="90fee0a7-5516-411c-af58-bd02a92f955c" containerName="extract-utilities" Feb 19 10:37:21 crc kubenswrapper[4965]: I0219 10:37:21.983369 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="90fee0a7-5516-411c-af58-bd02a92f955c" 
containerName="registry-server" Feb 19 10:37:21 crc kubenswrapper[4965]: I0219 10:37:21.983384 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="393a1279-ff19-4a2b-8ca8-47b9764ff756" containerName="container-00" Feb 19 10:37:21 crc kubenswrapper[4965]: I0219 10:37:21.984139 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vdz8f/crc-debug-x2mlq" Feb 19 10:37:21 crc kubenswrapper[4965]: I0219 10:37:21.987586 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vdz8f"/"default-dockercfg-8cb4v" Feb 19 10:37:22 crc kubenswrapper[4965]: I0219 10:37:22.087135 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8ca11be-cf3f-4e34-8137-85b7e19cd98c-host\") pod \"crc-debug-x2mlq\" (UID: \"c8ca11be-cf3f-4e34-8137-85b7e19cd98c\") " pod="openshift-must-gather-vdz8f/crc-debug-x2mlq" Feb 19 10:37:22 crc kubenswrapper[4965]: I0219 10:37:22.087186 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmhjp\" (UniqueName: \"kubernetes.io/projected/c8ca11be-cf3f-4e34-8137-85b7e19cd98c-kube-api-access-pmhjp\") pod \"crc-debug-x2mlq\" (UID: \"c8ca11be-cf3f-4e34-8137-85b7e19cd98c\") " pod="openshift-must-gather-vdz8f/crc-debug-x2mlq" Feb 19 10:37:22 crc kubenswrapper[4965]: I0219 10:37:22.189578 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8ca11be-cf3f-4e34-8137-85b7e19cd98c-host\") pod \"crc-debug-x2mlq\" (UID: \"c8ca11be-cf3f-4e34-8137-85b7e19cd98c\") " pod="openshift-must-gather-vdz8f/crc-debug-x2mlq" Feb 19 10:37:22 crc kubenswrapper[4965]: I0219 10:37:22.189638 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmhjp\" (UniqueName: 
\"kubernetes.io/projected/c8ca11be-cf3f-4e34-8137-85b7e19cd98c-kube-api-access-pmhjp\") pod \"crc-debug-x2mlq\" (UID: \"c8ca11be-cf3f-4e34-8137-85b7e19cd98c\") " pod="openshift-must-gather-vdz8f/crc-debug-x2mlq" Feb 19 10:37:22 crc kubenswrapper[4965]: I0219 10:37:22.189705 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8ca11be-cf3f-4e34-8137-85b7e19cd98c-host\") pod \"crc-debug-x2mlq\" (UID: \"c8ca11be-cf3f-4e34-8137-85b7e19cd98c\") " pod="openshift-must-gather-vdz8f/crc-debug-x2mlq" Feb 19 10:37:22 crc kubenswrapper[4965]: I0219 10:37:22.222145 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmhjp\" (UniqueName: \"kubernetes.io/projected/c8ca11be-cf3f-4e34-8137-85b7e19cd98c-kube-api-access-pmhjp\") pod \"crc-debug-x2mlq\" (UID: \"c8ca11be-cf3f-4e34-8137-85b7e19cd98c\") " pod="openshift-must-gather-vdz8f/crc-debug-x2mlq" Feb 19 10:37:22 crc kubenswrapper[4965]: I0219 10:37:22.313871 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vdz8f/crc-debug-x2mlq" Feb 19 10:37:22 crc kubenswrapper[4965]: I0219 10:37:22.691264 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vdz8f/crc-debug-x2mlq" event={"ID":"c8ca11be-cf3f-4e34-8137-85b7e19cd98c","Type":"ContainerStarted","Data":"a739a76b456d5071d9f9110c8ecb8424deb014c1d1a6e4599bb03ead1874d3f3"} Feb 19 10:37:22 crc kubenswrapper[4965]: I0219 10:37:22.691451 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vdz8f/crc-debug-x2mlq" event={"ID":"c8ca11be-cf3f-4e34-8137-85b7e19cd98c","Type":"ContainerStarted","Data":"360ed51d7a38c69abe05e6850442b92a2a27e8200384ed76e698c209c7c5127f"} Feb 19 10:37:22 crc kubenswrapper[4965]: I0219 10:37:22.703451 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vdz8f/crc-debug-x2mlq" podStartSLOduration=1.703433962 podStartE2EDuration="1.703433962s" podCreationTimestamp="2026-02-19 10:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:37:22.703098603 +0000 UTC m=+3298.324419923" watchObservedRunningTime="2026-02-19 10:37:22.703433962 +0000 UTC m=+3298.324755282" Feb 19 10:37:23 crc kubenswrapper[4965]: I0219 10:37:23.700023 4965 generic.go:334] "Generic (PLEG): container finished" podID="c8ca11be-cf3f-4e34-8137-85b7e19cd98c" containerID="a739a76b456d5071d9f9110c8ecb8424deb014c1d1a6e4599bb03ead1874d3f3" exitCode=0 Feb 19 10:37:23 crc kubenswrapper[4965]: I0219 10:37:23.700341 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vdz8f/crc-debug-x2mlq" event={"ID":"c8ca11be-cf3f-4e34-8137-85b7e19cd98c","Type":"ContainerDied","Data":"a739a76b456d5071d9f9110c8ecb8424deb014c1d1a6e4599bb03ead1874d3f3"} Feb 19 10:37:24 crc kubenswrapper[4965]: I0219 10:37:24.822117 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vdz8f/crc-debug-x2mlq" Feb 19 10:37:24 crc kubenswrapper[4965]: I0219 10:37:24.866938 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vdz8f/crc-debug-x2mlq"] Feb 19 10:37:24 crc kubenswrapper[4965]: I0219 10:37:24.876215 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vdz8f/crc-debug-x2mlq"] Feb 19 10:37:24 crc kubenswrapper[4965]: I0219 10:37:24.955901 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmhjp\" (UniqueName: \"kubernetes.io/projected/c8ca11be-cf3f-4e34-8137-85b7e19cd98c-kube-api-access-pmhjp\") pod \"c8ca11be-cf3f-4e34-8137-85b7e19cd98c\" (UID: \"c8ca11be-cf3f-4e34-8137-85b7e19cd98c\") " Feb 19 10:37:24 crc kubenswrapper[4965]: I0219 10:37:24.956119 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8ca11be-cf3f-4e34-8137-85b7e19cd98c-host\") pod \"c8ca11be-cf3f-4e34-8137-85b7e19cd98c\" (UID: \"c8ca11be-cf3f-4e34-8137-85b7e19cd98c\") " Feb 19 10:37:24 crc kubenswrapper[4965]: I0219 10:37:24.956247 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8ca11be-cf3f-4e34-8137-85b7e19cd98c-host" (OuterVolumeSpecName: "host") pod "c8ca11be-cf3f-4e34-8137-85b7e19cd98c" (UID: "c8ca11be-cf3f-4e34-8137-85b7e19cd98c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:37:24 crc kubenswrapper[4965]: I0219 10:37:24.956622 4965 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8ca11be-cf3f-4e34-8137-85b7e19cd98c-host\") on node \"crc\" DevicePath \"\"" Feb 19 10:37:24 crc kubenswrapper[4965]: I0219 10:37:24.968756 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8ca11be-cf3f-4e34-8137-85b7e19cd98c-kube-api-access-pmhjp" (OuterVolumeSpecName: "kube-api-access-pmhjp") pod "c8ca11be-cf3f-4e34-8137-85b7e19cd98c" (UID: "c8ca11be-cf3f-4e34-8137-85b7e19cd98c"). InnerVolumeSpecName "kube-api-access-pmhjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:37:25 crc kubenswrapper[4965]: I0219 10:37:25.058256 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmhjp\" (UniqueName: \"kubernetes.io/projected/c8ca11be-cf3f-4e34-8137-85b7e19cd98c-kube-api-access-pmhjp\") on node \"crc\" DevicePath \"\"" Feb 19 10:37:25 crc kubenswrapper[4965]: I0219 10:37:25.211788 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8ca11be-cf3f-4e34-8137-85b7e19cd98c" path="/var/lib/kubelet/pods/c8ca11be-cf3f-4e34-8137-85b7e19cd98c/volumes" Feb 19 10:37:25 crc kubenswrapper[4965]: I0219 10:37:25.723479 4965 scope.go:117] "RemoveContainer" containerID="a739a76b456d5071d9f9110c8ecb8424deb014c1d1a6e4599bb03ead1874d3f3" Feb 19 10:37:25 crc kubenswrapper[4965]: I0219 10:37:25.723540 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vdz8f/crc-debug-x2mlq" Feb 19 10:37:26 crc kubenswrapper[4965]: I0219 10:37:26.078418 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vdz8f/crc-debug-dsl76"] Feb 19 10:37:26 crc kubenswrapper[4965]: E0219 10:37:26.078886 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ca11be-cf3f-4e34-8137-85b7e19cd98c" containerName="container-00" Feb 19 10:37:26 crc kubenswrapper[4965]: I0219 10:37:26.078901 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ca11be-cf3f-4e34-8137-85b7e19cd98c" containerName="container-00" Feb 19 10:37:26 crc kubenswrapper[4965]: I0219 10:37:26.079105 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ca11be-cf3f-4e34-8137-85b7e19cd98c" containerName="container-00" Feb 19 10:37:26 crc kubenswrapper[4965]: I0219 10:37:26.079826 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vdz8f/crc-debug-dsl76" Feb 19 10:37:26 crc kubenswrapper[4965]: I0219 10:37:26.083278 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vdz8f"/"default-dockercfg-8cb4v" Feb 19 10:37:26 crc kubenswrapper[4965]: I0219 10:37:26.182559 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1475d9df-7211-4cfe-8e4c-c5940ee001fa-host\") pod \"crc-debug-dsl76\" (UID: \"1475d9df-7211-4cfe-8e4c-c5940ee001fa\") " pod="openshift-must-gather-vdz8f/crc-debug-dsl76" Feb 19 10:37:26 crc kubenswrapper[4965]: I0219 10:37:26.182698 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxw94\" (UniqueName: \"kubernetes.io/projected/1475d9df-7211-4cfe-8e4c-c5940ee001fa-kube-api-access-dxw94\") pod \"crc-debug-dsl76\" (UID: \"1475d9df-7211-4cfe-8e4c-c5940ee001fa\") " 
pod="openshift-must-gather-vdz8f/crc-debug-dsl76" Feb 19 10:37:26 crc kubenswrapper[4965]: I0219 10:37:26.284319 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1475d9df-7211-4cfe-8e4c-c5940ee001fa-host\") pod \"crc-debug-dsl76\" (UID: \"1475d9df-7211-4cfe-8e4c-c5940ee001fa\") " pod="openshift-must-gather-vdz8f/crc-debug-dsl76" Feb 19 10:37:26 crc kubenswrapper[4965]: I0219 10:37:26.284442 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxw94\" (UniqueName: \"kubernetes.io/projected/1475d9df-7211-4cfe-8e4c-c5940ee001fa-kube-api-access-dxw94\") pod \"crc-debug-dsl76\" (UID: \"1475d9df-7211-4cfe-8e4c-c5940ee001fa\") " pod="openshift-must-gather-vdz8f/crc-debug-dsl76" Feb 19 10:37:26 crc kubenswrapper[4965]: I0219 10:37:26.284510 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1475d9df-7211-4cfe-8e4c-c5940ee001fa-host\") pod \"crc-debug-dsl76\" (UID: \"1475d9df-7211-4cfe-8e4c-c5940ee001fa\") " pod="openshift-must-gather-vdz8f/crc-debug-dsl76" Feb 19 10:37:26 crc kubenswrapper[4965]: I0219 10:37:26.306432 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxw94\" (UniqueName: \"kubernetes.io/projected/1475d9df-7211-4cfe-8e4c-c5940ee001fa-kube-api-access-dxw94\") pod \"crc-debug-dsl76\" (UID: \"1475d9df-7211-4cfe-8e4c-c5940ee001fa\") " pod="openshift-must-gather-vdz8f/crc-debug-dsl76" Feb 19 10:37:26 crc kubenswrapper[4965]: I0219 10:37:26.398372 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vdz8f/crc-debug-dsl76" Feb 19 10:37:26 crc kubenswrapper[4965]: W0219 10:37:26.436910 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1475d9df_7211_4cfe_8e4c_c5940ee001fa.slice/crio-12913de05846315af6426734f065b1583f14e3f38871808017c6ad8821f83ef4 WatchSource:0}: Error finding container 12913de05846315af6426734f065b1583f14e3f38871808017c6ad8821f83ef4: Status 404 returned error can't find the container with id 12913de05846315af6426734f065b1583f14e3f38871808017c6ad8821f83ef4 Feb 19 10:37:26 crc kubenswrapper[4965]: I0219 10:37:26.736931 4965 generic.go:334] "Generic (PLEG): container finished" podID="1475d9df-7211-4cfe-8e4c-c5940ee001fa" containerID="f22bc53674c55279954875dbed2359d1c4d6a20d1d4a26f664c96e41ead56676" exitCode=0 Feb 19 10:37:26 crc kubenswrapper[4965]: I0219 10:37:26.737096 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vdz8f/crc-debug-dsl76" event={"ID":"1475d9df-7211-4cfe-8e4c-c5940ee001fa","Type":"ContainerDied","Data":"f22bc53674c55279954875dbed2359d1c4d6a20d1d4a26f664c96e41ead56676"} Feb 19 10:37:26 crc kubenswrapper[4965]: I0219 10:37:26.737418 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vdz8f/crc-debug-dsl76" event={"ID":"1475d9df-7211-4cfe-8e4c-c5940ee001fa","Type":"ContainerStarted","Data":"12913de05846315af6426734f065b1583f14e3f38871808017c6ad8821f83ef4"} Feb 19 10:37:26 crc kubenswrapper[4965]: I0219 10:37:26.787226 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vdz8f/crc-debug-dsl76"] Feb 19 10:37:26 crc kubenswrapper[4965]: I0219 10:37:26.798470 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vdz8f/crc-debug-dsl76"] Feb 19 10:37:27 crc kubenswrapper[4965]: I0219 10:37:27.873658 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vdz8f/crc-debug-dsl76" Feb 19 10:37:27 crc kubenswrapper[4965]: I0219 10:37:27.919863 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxw94\" (UniqueName: \"kubernetes.io/projected/1475d9df-7211-4cfe-8e4c-c5940ee001fa-kube-api-access-dxw94\") pod \"1475d9df-7211-4cfe-8e4c-c5940ee001fa\" (UID: \"1475d9df-7211-4cfe-8e4c-c5940ee001fa\") " Feb 19 10:37:27 crc kubenswrapper[4965]: I0219 10:37:27.920186 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1475d9df-7211-4cfe-8e4c-c5940ee001fa-host\") pod \"1475d9df-7211-4cfe-8e4c-c5940ee001fa\" (UID: \"1475d9df-7211-4cfe-8e4c-c5940ee001fa\") " Feb 19 10:37:27 crc kubenswrapper[4965]: I0219 10:37:27.920502 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1475d9df-7211-4cfe-8e4c-c5940ee001fa-host" (OuterVolumeSpecName: "host") pod "1475d9df-7211-4cfe-8e4c-c5940ee001fa" (UID: "1475d9df-7211-4cfe-8e4c-c5940ee001fa"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:37:27 crc kubenswrapper[4965]: I0219 10:37:27.920883 4965 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1475d9df-7211-4cfe-8e4c-c5940ee001fa-host\") on node \"crc\" DevicePath \"\"" Feb 19 10:37:27 crc kubenswrapper[4965]: I0219 10:37:27.953828 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1475d9df-7211-4cfe-8e4c-c5940ee001fa-kube-api-access-dxw94" (OuterVolumeSpecName: "kube-api-access-dxw94") pod "1475d9df-7211-4cfe-8e4c-c5940ee001fa" (UID: "1475d9df-7211-4cfe-8e4c-c5940ee001fa"). InnerVolumeSpecName "kube-api-access-dxw94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:37:28 crc kubenswrapper[4965]: I0219 10:37:28.022765 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxw94\" (UniqueName: \"kubernetes.io/projected/1475d9df-7211-4cfe-8e4c-c5940ee001fa-kube-api-access-dxw94\") on node \"crc\" DevicePath \"\"" Feb 19 10:37:28 crc kubenswrapper[4965]: I0219 10:37:28.758413 4965 scope.go:117] "RemoveContainer" containerID="f22bc53674c55279954875dbed2359d1c4d6a20d1d4a26f664c96e41ead56676" Feb 19 10:37:28 crc kubenswrapper[4965]: I0219 10:37:28.758837 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vdz8f/crc-debug-dsl76" Feb 19 10:37:29 crc kubenswrapper[4965]: I0219 10:37:29.209507 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1475d9df-7211-4cfe-8e4c-c5940ee001fa" path="/var/lib/kubelet/pods/1475d9df-7211-4cfe-8e4c-c5940ee001fa/volumes" Feb 19 10:37:54 crc kubenswrapper[4965]: I0219 10:37:54.065126 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_45105c9e-db96-41c5-ba42-d56027ca318c/init-config-reloader/0.log" Feb 19 10:37:54 crc kubenswrapper[4965]: I0219 10:37:54.279867 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_45105c9e-db96-41c5-ba42-d56027ca318c/init-config-reloader/0.log" Feb 19 10:37:54 crc kubenswrapper[4965]: I0219 10:37:54.374905 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_45105c9e-db96-41c5-ba42-d56027ca318c/alertmanager/0.log" Feb 19 10:37:54 crc kubenswrapper[4965]: I0219 10:37:54.436896 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_45105c9e-db96-41c5-ba42-d56027ca318c/config-reloader/0.log" Feb 19 10:37:54 crc kubenswrapper[4965]: I0219 10:37:54.454086 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-c97f468c6-bwf6p_45f4a2b8-338f-4c3d-afe8-305eb599081c/barbican-api/0.log" Feb 19 10:37:54 crc kubenswrapper[4965]: I0219 10:37:54.575350 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c97f468c6-bwf6p_45f4a2b8-338f-4c3d-afe8-305eb599081c/barbican-api-log/0.log" Feb 19 10:37:54 crc kubenswrapper[4965]: I0219 10:37:54.672422 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6855d4854d-gc94v_efe10142-642a-45d3-9f5a-8d1f2cb717e9/barbican-keystone-listener/0.log" Feb 19 10:37:54 crc kubenswrapper[4965]: I0219 10:37:54.856926 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6855d4854d-gc94v_efe10142-642a-45d3-9f5a-8d1f2cb717e9/barbican-keystone-listener-log/0.log" Feb 19 10:37:54 crc kubenswrapper[4965]: I0219 10:37:54.874762 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55c5f69ff7-qk9n8_a32c3eed-880c-428c-b58e-d89c763d11b9/barbican-worker/0.log" Feb 19 10:37:55 crc kubenswrapper[4965]: I0219 10:37:55.057832 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55c5f69ff7-qk9n8_a32c3eed-880c-428c-b58e-d89c763d11b9/barbican-worker-log/0.log" Feb 19 10:37:55 crc kubenswrapper[4965]: I0219 10:37:55.118341 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9_a6a006f0-d704-4e08-bc46-118269ad9b1a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:37:55 crc kubenswrapper[4965]: I0219 10:37:55.396409 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2a094c76-7174-4b58-8b32-12020982c63b/ceilometer-central-agent/0.log" Feb 19 10:37:55 crc kubenswrapper[4965]: I0219 10:37:55.422400 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_2a094c76-7174-4b58-8b32-12020982c63b/ceilometer-notification-agent/0.log" Feb 19 10:37:55 crc kubenswrapper[4965]: I0219 10:37:55.493315 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2a094c76-7174-4b58-8b32-12020982c63b/proxy-httpd/0.log" Feb 19 10:37:55 crc kubenswrapper[4965]: I0219 10:37:55.502727 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2a094c76-7174-4b58-8b32-12020982c63b/sg-core/0.log" Feb 19 10:37:55 crc kubenswrapper[4965]: I0219 10:37:55.667651 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_92030a04-19d0-4766-b560-3d5b64be8716/cinder-api-log/0.log" Feb 19 10:37:55 crc kubenswrapper[4965]: I0219 10:37:55.696343 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_92030a04-19d0-4766-b560-3d5b64be8716/cinder-api/0.log" Feb 19 10:37:55 crc kubenswrapper[4965]: I0219 10:37:55.901372 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2e838493-1547-4574-8af2-eff17e75c65b/cinder-scheduler/0.log" Feb 19 10:37:55 crc kubenswrapper[4965]: I0219 10:37:55.972702 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2e838493-1547-4574-8af2-eff17e75c65b/probe/0.log" Feb 19 10:37:56 crc kubenswrapper[4965]: I0219 10:37:56.104186 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_3ed10660-2674-4274-a62b-366af8d375da/cloudkitty-api/0.log" Feb 19 10:37:56 crc kubenswrapper[4965]: I0219 10:37:56.130393 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_3ed10660-2674-4274-a62b-366af8d375da/cloudkitty-api-log/0.log" Feb 19 10:37:56 crc kubenswrapper[4965]: I0219 10:37:56.290161 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_5038aafe-e39d-479c-b355-bbac1a77fa4a/loki-compactor/0.log" Feb 19 10:37:56 crc kubenswrapper[4965]: I0219 10:37:56.375529 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-585d9bcbc-ktrzq_afbb0d2a-5cd0-4358-b5b0-c22749400326/loki-distributor/0.log" Feb 19 10:37:56 crc kubenswrapper[4965]: I0219 10:37:56.615607 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-9vkbl_3c673b0f-7739-4b94-99b9-abd66fb51937/gateway/0.log" Feb 19 10:37:56 crc kubenswrapper[4965]: I0219 10:37:56.647775 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-h6555_b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a/gateway/0.log" Feb 19 10:37:57 crc kubenswrapper[4965]: I0219 10:37:57.302650 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_f9902193-fba0-4ea4-8de6-352459b1c13f/loki-index-gateway/0.log" Feb 19 10:37:57 crc kubenswrapper[4965]: I0219 10:37:57.327427 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_faab82f2-bc31-438d-b329-9a31d6ba5040/loki-ingester/0.log" Feb 19 10:37:57 crc kubenswrapper[4965]: I0219 10:37:57.577503 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg_849f49ac-72be-49ce-ab6b-2eb5890a6337/loki-query-frontend/0.log" Feb 19 10:37:58 crc kubenswrapper[4965]: I0219 10:37:58.164952 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z_04f58633-9350-49a8-9c41-522490a298eb/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:37:58 crc kubenswrapper[4965]: I0219 10:37:58.230976 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj_64c1fbe6-a102-40e1-920a-319b6664c77e/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:37:58 crc kubenswrapper[4965]: I0219 10:37:58.250761 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-58c84b5844-hb4c6_7adcb318-8832-417d-814a-7a2d21c8af30/loki-querier/0.log" Feb 19 10:37:58 crc kubenswrapper[4965]: I0219 10:37:58.441027 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-gdxjp_5f29d993-47df-4952-a137-bb5cf52ea59a/init/0.log" Feb 19 10:37:58 crc kubenswrapper[4965]: I0219 10:37:58.641927 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-gdxjp_5f29d993-47df-4952-a137-bb5cf52ea59a/init/0.log" Feb 19 10:37:58 crc kubenswrapper[4965]: I0219 10:37:58.705154 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-gdxjp_5f29d993-47df-4952-a137-bb5cf52ea59a/dnsmasq-dns/0.log" Feb 19 10:37:58 crc kubenswrapper[4965]: I0219 10:37:58.746792 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-vfth9_2cc29510-fe65-45e1-b4fe-fef9bb2923b0/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:37:58 crc kubenswrapper[4965]: I0219 10:37:58.997834 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d1ece847-d2dd-42e7-ad4c-5f9ad04529f8/glance-log/0.log" Feb 19 10:37:59 crc kubenswrapper[4965]: I0219 10:37:59.097054 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d1ece847-d2dd-42e7-ad4c-5f9ad04529f8/glance-httpd/0.log" Feb 19 10:37:59 crc kubenswrapper[4965]: I0219 10:37:59.208808 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_b7e632f4-f05e-4ac6-a1cd-96ae3244c450/glance-httpd/0.log" Feb 19 10:37:59 crc kubenswrapper[4965]: I0219 10:37:59.275839 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b7e632f4-f05e-4ac6-a1cd-96ae3244c450/glance-log/0.log" Feb 19 10:37:59 crc kubenswrapper[4965]: I0219 10:37:59.325557 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-gtd56_25080ebe-a4ea-4698-b64c-b7064ff93db6/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:37:59 crc kubenswrapper[4965]: I0219 10:37:59.576787 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-kpx6m_0f72a778-ba2a-4454-bba8-865897b5d656/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:37:59 crc kubenswrapper[4965]: I0219 10:37:59.883757 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_947a2943-c25c-4606-848a-a2942e8988c9/kube-state-metrics/0.log" Feb 19 10:38:00 crc kubenswrapper[4965]: I0219 10:38:00.074352 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6dbb44f597-5cgmc_3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b/keystone-api/0.log" Feb 19 10:38:00 crc kubenswrapper[4965]: I0219 10:38:00.431010 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8_6b29dda1-69ac-4d2a-a078-e2f1a7103b67/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:38:00 crc kubenswrapper[4965]: I0219 10:38:00.782736 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-76546766f9-plbd4_40c5d1a6-44fc-4f35-a393-d82f69dde17f/neutron-httpd/0.log" Feb 19 10:38:00 crc kubenswrapper[4965]: I0219 10:38:00.940809 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-76546766f9-plbd4_40c5d1a6-44fc-4f35-a393-d82f69dde17f/neutron-api/0.log" Feb 19 10:38:01 crc kubenswrapper[4965]: I0219 10:38:01.005648 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76_1189041a-04c1-4fa1-9c71-daf77ef8b3fe/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:38:01 crc kubenswrapper[4965]: I0219 10:38:01.635312 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4b71fda7-2162-4dda-a5ba-053eb96e59a9/nova-api-log/0.log" Feb 19 10:38:01 crc kubenswrapper[4965]: I0219 10:38:01.785577 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4b71fda7-2162-4dda-a5ba-053eb96e59a9/nova-api-api/0.log" Feb 19 10:38:01 crc kubenswrapper[4965]: I0219 10:38:01.842252 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_8af99122-00d4-45e7-8e66-f541ba54a66a/nova-cell0-conductor-conductor/0.log" Feb 19 10:38:02 crc kubenswrapper[4965]: I0219 10:38:02.278863 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_04d07332-2cb5-49b4-b70c-9f3a13f73a09/nova-cell1-conductor-conductor/0.log" Feb 19 10:38:02 crc kubenswrapper[4965]: I0219 10:38:02.307296 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_9358573e-5a2b-4f2a-bbff-0e55e0e00869/nova-cell1-novncproxy-novncproxy/0.log" Feb 19 10:38:02 crc kubenswrapper[4965]: I0219 10:38:02.602597 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-x4cs9_4bb72c3c-878c-497d-8105-767df1971b0d/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:38:02 crc kubenswrapper[4965]: I0219 10:38:02.798359 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_b29094ed-8036-44ed-a882-7ad1d5ad4cc3/nova-metadata-log/0.log" Feb 19 10:38:03 crc kubenswrapper[4965]: I0219 10:38:03.276812 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6bc77d18-18ba-4f28-ab8c-a1d4e77996f3/nova-scheduler-scheduler/0.log" Feb 19 10:38:03 crc kubenswrapper[4965]: I0219 10:38:03.460312 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5b862187-0edd-4939-9260-d0d35653485c/mysql-bootstrap/0.log" Feb 19 10:38:03 crc kubenswrapper[4965]: I0219 10:38:03.650992 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5b862187-0edd-4939-9260-d0d35653485c/galera/0.log" Feb 19 10:38:03 crc kubenswrapper[4965]: I0219 10:38:03.725867 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5b862187-0edd-4939-9260-d0d35653485c/mysql-bootstrap/0.log" Feb 19 10:38:04 crc kubenswrapper[4965]: I0219 10:38:04.058907 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_215df1f4-6c30-4144-b141-5a867e8d2728/mysql-bootstrap/0.log" Feb 19 10:38:04 crc kubenswrapper[4965]: I0219 10:38:04.241264 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_215df1f4-6c30-4144-b141-5a867e8d2728/galera/0.log" Feb 19 10:38:04 crc kubenswrapper[4965]: I0219 10:38:04.246679 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_215df1f4-6c30-4144-b141-5a867e8d2728/mysql-bootstrap/0.log" Feb 19 10:38:04 crc kubenswrapper[4965]: I0219 10:38:04.260562 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b29094ed-8036-44ed-a882-7ad1d5ad4cc3/nova-metadata-metadata/0.log" Feb 19 10:38:04 crc kubenswrapper[4965]: I0219 10:38:04.523750 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstackclient_96d45563-22bf-42f1-bc03-4fd3b223293d/openstackclient/0.log" Feb 19 10:38:04 crc kubenswrapper[4965]: I0219 10:38:04.781132 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-wwgl6_154fb9e1-1e52-4338-964c-8210b8bbbc57/openstack-network-exporter/0.log" Feb 19 10:38:04 crc kubenswrapper[4965]: I0219 10:38:04.910751 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-mwlb6_0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a/ovn-controller/0.log" Feb 19 10:38:05 crc kubenswrapper[4965]: I0219 10:38:05.132158 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jlns7_3f408d9e-6ca2-490c-be7e-0516fa19db75/ovsdb-server-init/0.log" Feb 19 10:38:05 crc kubenswrapper[4965]: I0219 10:38:05.328049 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jlns7_3f408d9e-6ca2-490c-be7e-0516fa19db75/ovsdb-server/0.log" Feb 19 10:38:05 crc kubenswrapper[4965]: I0219 10:38:05.364286 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jlns7_3f408d9e-6ca2-490c-be7e-0516fa19db75/ovs-vswitchd/0.log" Feb 19 10:38:05 crc kubenswrapper[4965]: I0219 10:38:05.401430 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jlns7_3f408d9e-6ca2-490c-be7e-0516fa19db75/ovsdb-server-init/0.log" Feb 19 10:38:05 crc kubenswrapper[4965]: I0219 10:38:05.643703 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-xd7zh_24c52aa6-9277-4040-8262-1bac8005a463/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:38:05 crc kubenswrapper[4965]: I0219 10:38:05.939820 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_89c93b76-069c-4c94-aa84-a77d7e4c8e26/openstack-network-exporter/0.log" Feb 19 10:38:05 crc kubenswrapper[4965]: I0219 10:38:05.948793 
4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_89c93b76-069c-4c94-aa84-a77d7e4c8e26/ovn-northd/0.log" Feb 19 10:38:06 crc kubenswrapper[4965]: I0219 10:38:06.159066 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_98633dba-c95c-4f35-a045-5c738d652492/openstack-network-exporter/0.log" Feb 19 10:38:06 crc kubenswrapper[4965]: I0219 10:38:06.220506 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_98633dba-c95c-4f35-a045-5c738d652492/ovsdbserver-nb/0.log" Feb 19 10:38:06 crc kubenswrapper[4965]: I0219 10:38:06.417211 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1520d7ba-9d74-47f8-9c7a-9731ae9ff49e/openstack-network-exporter/0.log" Feb 19 10:38:06 crc kubenswrapper[4965]: I0219 10:38:06.466446 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1520d7ba-9d74-47f8-9c7a-9731ae9ff49e/ovsdbserver-sb/0.log" Feb 19 10:38:06 crc kubenswrapper[4965]: I0219 10:38:06.760433 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-fbb65bccb-zmlg7_8505e9f1-238a-4f32-95a4-95979a4f7bac/placement-api/0.log" Feb 19 10:38:06 crc kubenswrapper[4965]: I0219 10:38:06.956447 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-fbb65bccb-zmlg7_8505e9f1-238a-4f32-95a4-95979a4f7bac/placement-log/0.log" Feb 19 10:38:07 crc kubenswrapper[4965]: I0219 10:38:07.035538 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_efd4d0e2-bc4c-4bac-9236-37338445f7c7/init-config-reloader/0.log" Feb 19 10:38:07 crc kubenswrapper[4965]: I0219 10:38:07.314920 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_efd4d0e2-bc4c-4bac-9236-37338445f7c7/init-config-reloader/0.log" Feb 19 10:38:07 crc kubenswrapper[4965]: I0219 10:38:07.340180 4965 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_efd4d0e2-bc4c-4bac-9236-37338445f7c7/config-reloader/0.log" Feb 19 10:38:07 crc kubenswrapper[4965]: I0219 10:38:07.347664 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_efd4d0e2-bc4c-4bac-9236-37338445f7c7/prometheus/0.log" Feb 19 10:38:07 crc kubenswrapper[4965]: I0219 10:38:07.536722 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_efd4d0e2-bc4c-4bac-9236-37338445f7c7/thanos-sidecar/0.log" Feb 19 10:38:07 crc kubenswrapper[4965]: I0219 10:38:07.566471 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8214d39f-90ff-4188-abbf-6a097f33eef0/setup-container/0.log" Feb 19 10:38:07 crc kubenswrapper[4965]: I0219 10:38:07.902582 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8214d39f-90ff-4188-abbf-6a097f33eef0/setup-container/0.log" Feb 19 10:38:07 crc kubenswrapper[4965]: I0219 10:38:07.974538 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8214d39f-90ff-4188-abbf-6a097f33eef0/rabbitmq/0.log" Feb 19 10:38:08 crc kubenswrapper[4965]: I0219 10:38:08.217729 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa/setup-container/0.log" Feb 19 10:38:08 crc kubenswrapper[4965]: I0219 10:38:08.625906 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa/setup-container/0.log" Feb 19 10:38:08 crc kubenswrapper[4965]: I0219 10:38:08.766993 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa/rabbitmq/0.log" Feb 19 10:38:08 crc kubenswrapper[4965]: I0219 10:38:08.901081 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4_6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:38:09 crc kubenswrapper[4965]: I0219 10:38:09.006833 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-rdb84_da068017-3803-4d74-bea1-932b1d829055/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:38:09 crc kubenswrapper[4965]: I0219 10:38:09.308705 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v_58f8c7f1-d425-4b21-ba27-1e47c69ddd93/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:38:09 crc kubenswrapper[4965]: I0219 10:38:09.604450 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-bv8v8_6827b2eb-c6f9-42d2-b11d-ef676213f97f/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:38:09 crc kubenswrapper[4965]: I0219 10:38:09.619185 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-rz8q9_b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4/ssh-known-hosts-edpm-deployment/0.log" Feb 19 10:38:09 crc kubenswrapper[4965]: I0219 10:38:09.922658 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-76485c5b9f-wzzpl_69ee7a64-2965-42d1-bad2-82087733b567/proxy-server/0.log" Feb 19 10:38:10 crc kubenswrapper[4965]: I0219 10:38:10.116548 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-76485c5b9f-wzzpl_69ee7a64-2965-42d1-bad2-82087733b567/proxy-httpd/0.log" Feb 19 10:38:10 crc kubenswrapper[4965]: I0219 10:38:10.414348 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-kx9jd_f2a6db35-796d-485d-9b96-5c03b7d7725b/swift-ring-rebalance/0.log" Feb 19 10:38:10 crc kubenswrapper[4965]: I0219 10:38:10.634157 4965 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/account-reaper/0.log" Feb 19 10:38:10 crc kubenswrapper[4965]: I0219 10:38:10.661383 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/account-auditor/0.log" Feb 19 10:38:10 crc kubenswrapper[4965]: I0219 10:38:10.781919 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/account-replicator/0.log" Feb 19 10:38:10 crc kubenswrapper[4965]: I0219 10:38:10.902504 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/container-auditor/0.log" Feb 19 10:38:10 crc kubenswrapper[4965]: I0219 10:38:10.913489 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/account-server/0.log" Feb 19 10:38:11 crc kubenswrapper[4965]: I0219 10:38:11.042233 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/container-replicator/0.log" Feb 19 10:38:11 crc kubenswrapper[4965]: I0219 10:38:11.090585 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/container-server/0.log" Feb 19 10:38:11 crc kubenswrapper[4965]: I0219 10:38:11.120919 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/container-updater/0.log" Feb 19 10:38:11 crc kubenswrapper[4965]: I0219 10:38:11.215876 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/object-auditor/0.log" Feb 19 10:38:11 crc kubenswrapper[4965]: I0219 10:38:11.335418 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/object-replicator/0.log" Feb 19 10:38:11 crc kubenswrapper[4965]: I0219 10:38:11.374466 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/object-expirer/0.log" Feb 19 10:38:11 crc kubenswrapper[4965]: I0219 10:38:11.480396 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-proc-0_aa682c21-5c48-4518-9033-2f28eae7f24d/cloudkitty-proc/0.log" Feb 19 10:38:11 crc kubenswrapper[4965]: I0219 10:38:11.511746 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/object-server/0.log" Feb 19 10:38:11 crc kubenswrapper[4965]: I0219 10:38:11.614601 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/object-updater/0.log" Feb 19 10:38:11 crc kubenswrapper[4965]: I0219 10:38:11.729731 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/swift-recon-cron/0.log" Feb 19 10:38:11 crc kubenswrapper[4965]: I0219 10:38:11.732826 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/rsync/0.log" Feb 19 10:38:12 crc kubenswrapper[4965]: I0219 10:38:12.031147 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2_b3d2f922-3941-4ff3-92fc-6bb14cd46698/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:38:12 crc kubenswrapper[4965]: I0219 10:38:12.039869 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a/tempest-tests-tempest-tests-runner/0.log" Feb 19 10:38:12 crc kubenswrapper[4965]: I0219 10:38:12.242383 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_ca9aec62-8a03-4f2d-acf7-cb4c5a08be00/test-operator-logs-container/0.log" Feb 19 10:38:12 crc kubenswrapper[4965]: I0219 10:38:12.333854 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-58l4m_9873ade5-a134-4b72-bbfe-468df59b993f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:38:17 crc kubenswrapper[4965]: I0219 10:38:17.621143 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_40caef4c-7f84-42cb-b51c-b0884efc2052/memcached/0.log" Feb 19 10:38:43 crc kubenswrapper[4965]: I0219 10:38:43.557530 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr_62a8af9d-5a83-4c80-bc2e-49c0c576ed6e/util/0.log" Feb 19 10:38:43 crc kubenswrapper[4965]: I0219 10:38:43.799954 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr_62a8af9d-5a83-4c80-bc2e-49c0c576ed6e/util/0.log" Feb 19 10:38:43 crc kubenswrapper[4965]: I0219 10:38:43.890266 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr_62a8af9d-5a83-4c80-bc2e-49c0c576ed6e/pull/0.log" Feb 19 10:38:43 crc kubenswrapper[4965]: I0219 10:38:43.905223 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr_62a8af9d-5a83-4c80-bc2e-49c0c576ed6e/pull/0.log" Feb 19 10:38:44 crc kubenswrapper[4965]: I0219 10:38:44.129011 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr_62a8af9d-5a83-4c80-bc2e-49c0c576ed6e/pull/0.log" Feb 19 10:38:44 crc kubenswrapper[4965]: I0219 
10:38:44.136904 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr_62a8af9d-5a83-4c80-bc2e-49c0c576ed6e/util/0.log" Feb 19 10:38:44 crc kubenswrapper[4965]: I0219 10:38:44.161077 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr_62a8af9d-5a83-4c80-bc2e-49c0c576ed6e/extract/0.log" Feb 19 10:38:44 crc kubenswrapper[4965]: I0219 10:38:44.610433 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-ztvs5_7c1737a3-9dfe-4208-a8da-8be7f09394d9/manager/0.log" Feb 19 10:38:45 crc kubenswrapper[4965]: I0219 10:38:45.267090 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-bndgq_ef077548-5e44-43f1-9f0d-3cf539bca16b/manager/0.log" Feb 19 10:38:45 crc kubenswrapper[4965]: I0219 10:38:45.526591 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-4rtq9_8e1c4dc5-2d5b-46fb-b3cc-1ae2749fd02c/manager/0.log" Feb 19 10:38:45 crc kubenswrapper[4965]: I0219 10:38:45.738845 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-vgqfx_fe5bbdd4-d10a-4bc6-bd35-76c7abb54600/manager/0.log" Feb 19 10:38:45 crc kubenswrapper[4965]: I0219 10:38:45.967424 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-2g7mq_74f4ddc1-28bd-411f-8f0c-c5bfc3bfcec6/manager/0.log" Feb 19 10:38:46 crc kubenswrapper[4965]: I0219 10:38:46.300762 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-z78k7_73c20094-0abc-4525-ae77-d571755841fa/manager/0.log" Feb 19 10:38:46 
crc kubenswrapper[4965]: I0219 10:38:46.442882 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-zqmsr_2f161526-b0fd-453b-8ae7-7b9b7a485b97/manager/0.log" Feb 19 10:38:46 crc kubenswrapper[4965]: I0219 10:38:46.600645 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:38:46 crc kubenswrapper[4965]: I0219 10:38:46.600692 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:38:46 crc kubenswrapper[4965]: I0219 10:38:46.617236 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-h5skt_5747cc94-5621-4a7d-b599-f2a0f2a2aa29/manager/0.log" Feb 19 10:38:46 crc kubenswrapper[4965]: I0219 10:38:46.737758 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-zh77z_c94f0d1d-5edd-4b64-b2c7-85bdc5022ec3/manager/0.log" Feb 19 10:38:46 crc kubenswrapper[4965]: I0219 10:38:46.971395 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-4md54_9898282c-422b-49dd-b369-da910d49a2d8/manager/0.log" Feb 19 10:38:47 crc kubenswrapper[4965]: I0219 10:38:47.245470 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-49xr8_ec34bcd2-48d7-4522-a32a-268a3a1b385c/manager/0.log" Feb 19 
10:38:47 crc kubenswrapper[4965]: I0219 10:38:47.386723 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-7mzd9_18230479-3d13-49f7-a2a1-95a191acb3db/manager/0.log" Feb 19 10:38:47 crc kubenswrapper[4965]: I0219 10:38:47.790149 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq_58e82cd5-3bd0-4f99-b958-29e5541fa49a/manager/0.log" Feb 19 10:38:48 crc kubenswrapper[4965]: I0219 10:38:48.244817 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-d86db9fbc-vplp8_5c35ac3d-3c0a-48b2-a17d-ce896fa8a00e/operator/0.log" Feb 19 10:38:48 crc kubenswrapper[4965]: I0219 10:38:48.532544 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-wpjxk_2c206b8c-0a2e-4081-8f51-29977545ef20/registry-server/0.log" Feb 19 10:38:49 crc kubenswrapper[4965]: I0219 10:38:49.154766 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-wp77d_ca57fee7-64f8-4c49-9170-6f6e618c78e7/manager/0.log" Feb 19 10:38:49 crc kubenswrapper[4965]: I0219 10:38:49.412024 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-h27hl_a354e865-3819-4147-a565-4682bc4c6a6c/manager/0.log" Feb 19 10:38:49 crc kubenswrapper[4965]: I0219 10:38:49.701574 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-frln4_f1723aed-01cb-4ac1-b191-299a6dd638e5/operator/0.log" Feb 19 10:38:50 crc kubenswrapper[4965]: I0219 10:38:50.212874 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-h5rvt_e70fa350-bca9-4007-80a9-15cfb3a56b11/manager/0.log" Feb 
19 10:38:50 crc kubenswrapper[4965]: I0219 10:38:50.467605 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-glzx9_a0ff2743-9ab6-4388-b0af-06e06c3e7587/manager/0.log" Feb 19 10:38:50 crc kubenswrapper[4965]: I0219 10:38:50.702742 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-jzssc_7e1ae3d6-7af0-406d-b740-98c9f5c9403c/manager/0.log" Feb 19 10:38:50 crc kubenswrapper[4965]: I0219 10:38:50.932423 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-54vfn_6bd1df07-8b75-44b8-91a3-4f612b64c279/manager/0.log" Feb 19 10:38:50 crc kubenswrapper[4965]: I0219 10:38:50.960612 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7f6588fc96-6phd8_186369a2-50b6-4226-be98-8876e469033f/manager/0.log" Feb 19 10:38:51 crc kubenswrapper[4965]: I0219 10:38:51.039669 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-dff68c48-5928s_f1fcb3fa-62de-4b0b-93db-3e401ff94fe4/manager/0.log" Feb 19 10:38:53 crc kubenswrapper[4965]: I0219 10:38:53.125912 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-jncdt_24b54009-86e7-409a-991e-a406d38ab751/manager/0.log" Feb 19 10:39:13 crc kubenswrapper[4965]: I0219 10:39:13.671364 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-2v6fb_5e0fcf66-e50c-4c4c-9370-08ed336d25d9/control-plane-machine-set-operator/0.log" Feb 19 10:39:13 crc kubenswrapper[4965]: I0219 10:39:13.836848 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hqt8l_49cf856e-b37d-4ab6-9c6e-241cbc4be93e/machine-api-operator/0.log" Feb 19 10:39:13 crc kubenswrapper[4965]: I0219 10:39:13.847243 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hqt8l_49cf856e-b37d-4ab6-9c6e-241cbc4be93e/kube-rbac-proxy/0.log" Feb 19 10:39:16 crc kubenswrapper[4965]: I0219 10:39:16.609795 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:39:16 crc kubenswrapper[4965]: I0219 10:39:16.610322 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:39:27 crc kubenswrapper[4965]: I0219 10:39:27.837661 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-vmfkz_41967e40-5df3-456a-aae9-86b898d18216/cert-manager-controller/0.log" Feb 19 10:39:28 crc kubenswrapper[4965]: I0219 10:39:28.008517 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-ls5h7_592650ba-f791-4f32-bbbe-23c0a5d9e82b/cert-manager-cainjector/0.log" Feb 19 10:39:28 crc kubenswrapper[4965]: I0219 10:39:28.090343 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-5vx5v_ac8283e8-11a9-4b2f-ac84-4f8f6a7821bc/cert-manager-webhook/0.log" Feb 19 10:39:41 crc kubenswrapper[4965]: I0219 10:39:41.422762 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-5xfsg_85cb536f-7492-4fb3-90dd-d71c7d207771/nmstate-console-plugin/0.log" Feb 19 10:39:41 crc kubenswrapper[4965]: I0219 10:39:41.648769 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ftvpm_75ab303c-d1a1-45fd-b457-b5c2a118e898/nmstate-handler/0.log" Feb 19 10:39:41 crc kubenswrapper[4965]: I0219 10:39:41.713370 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-jnl6n_fbb4bfee-56b1-49ff-ae41-a6ea373fd06a/kube-rbac-proxy/0.log" Feb 19 10:39:41 crc kubenswrapper[4965]: I0219 10:39:41.740148 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-jnl6n_fbb4bfee-56b1-49ff-ae41-a6ea373fd06a/nmstate-metrics/0.log" Feb 19 10:39:41 crc kubenswrapper[4965]: I0219 10:39:41.892738 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-kkrqs_56e58cdb-3ef2-4cbf-a926-70ac47e83f9c/nmstate-operator/0.log" Feb 19 10:39:41 crc kubenswrapper[4965]: I0219 10:39:41.928991 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-8gm87_1b0feb3d-7d0d-43b4-bf7c-afd4e11dc0b7/nmstate-webhook/0.log" Feb 19 10:39:46 crc kubenswrapper[4965]: I0219 10:39:46.600797 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:39:46 crc kubenswrapper[4965]: I0219 10:39:46.601486 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:39:46 crc kubenswrapper[4965]: I0219 10:39:46.601534 4965 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" Feb 19 10:39:46 crc kubenswrapper[4965]: I0219 10:39:46.602298 4965 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"80c316cf612b9ca1c6b347e543b0dd6345d3fe3eb0881cb783c0e74417f03cc0"} pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:39:46 crc kubenswrapper[4965]: I0219 10:39:46.602353 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" containerID="cri-o://80c316cf612b9ca1c6b347e543b0dd6345d3fe3eb0881cb783c0e74417f03cc0" gracePeriod=600 Feb 19 10:39:46 crc kubenswrapper[4965]: E0219 10:39:46.772999 4965 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63ef3eb8_6103_492d_b6ef_f16081d15e83.slice/crio-conmon-80c316cf612b9ca1c6b347e543b0dd6345d3fe3eb0881cb783c0e74417f03cc0.scope\": RecentStats: unable to find data in memory cache]" Feb 19 10:39:47 crc kubenswrapper[4965]: I0219 10:39:47.157886 4965 generic.go:334] "Generic (PLEG): container finished" podID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerID="80c316cf612b9ca1c6b347e543b0dd6345d3fe3eb0881cb783c0e74417f03cc0" exitCode=0 Feb 19 10:39:47 crc kubenswrapper[4965]: I0219 10:39:47.157967 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" 
event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerDied","Data":"80c316cf612b9ca1c6b347e543b0dd6345d3fe3eb0881cb783c0e74417f03cc0"} Feb 19 10:39:47 crc kubenswrapper[4965]: I0219 10:39:47.158776 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerStarted","Data":"dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088"} Feb 19 10:39:47 crc kubenswrapper[4965]: I0219 10:39:47.158813 4965 scope.go:117] "RemoveContainer" containerID="b29f2a4df3d1b4e7054216ba139e5ed515eb129972b5c4cb743ccf8db61bf96b" Feb 19 10:39:55 crc kubenswrapper[4965]: I0219 10:39:55.475618 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-564bb987d4-6pxn4_d8ed232a-7084-4f69-afdf-6d674b5864de/kube-rbac-proxy/0.log" Feb 19 10:39:55 crc kubenswrapper[4965]: I0219 10:39:55.493684 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-564bb987d4-6pxn4_d8ed232a-7084-4f69-afdf-6d674b5864de/manager/0.log" Feb 19 10:40:07 crc kubenswrapper[4965]: I0219 10:40:07.771133 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rjp8r"] Feb 19 10:40:07 crc kubenswrapper[4965]: E0219 10:40:07.778798 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1475d9df-7211-4cfe-8e4c-c5940ee001fa" containerName="container-00" Feb 19 10:40:07 crc kubenswrapper[4965]: I0219 10:40:07.779024 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="1475d9df-7211-4cfe-8e4c-c5940ee001fa" containerName="container-00" Feb 19 10:40:07 crc kubenswrapper[4965]: I0219 10:40:07.779449 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="1475d9df-7211-4cfe-8e4c-c5940ee001fa" containerName="container-00" Feb 19 10:40:07 crc kubenswrapper[4965]: I0219 
10:40:07.786839 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjp8r"] Feb 19 10:40:07 crc kubenswrapper[4965]: I0219 10:40:07.803516 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjp8r" Feb 19 10:40:07 crc kubenswrapper[4965]: I0219 10:40:07.890828 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8h2w\" (UniqueName: \"kubernetes.io/projected/d9cab2e1-f235-4ddd-a17a-78af546d0fb2-kube-api-access-f8h2w\") pod \"redhat-marketplace-rjp8r\" (UID: \"d9cab2e1-f235-4ddd-a17a-78af546d0fb2\") " pod="openshift-marketplace/redhat-marketplace-rjp8r" Feb 19 10:40:07 crc kubenswrapper[4965]: I0219 10:40:07.890902 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9cab2e1-f235-4ddd-a17a-78af546d0fb2-utilities\") pod \"redhat-marketplace-rjp8r\" (UID: \"d9cab2e1-f235-4ddd-a17a-78af546d0fb2\") " pod="openshift-marketplace/redhat-marketplace-rjp8r" Feb 19 10:40:07 crc kubenswrapper[4965]: I0219 10:40:07.890946 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9cab2e1-f235-4ddd-a17a-78af546d0fb2-catalog-content\") pod \"redhat-marketplace-rjp8r\" (UID: \"d9cab2e1-f235-4ddd-a17a-78af546d0fb2\") " pod="openshift-marketplace/redhat-marketplace-rjp8r" Feb 19 10:40:07 crc kubenswrapper[4965]: I0219 10:40:07.992890 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8h2w\" (UniqueName: \"kubernetes.io/projected/d9cab2e1-f235-4ddd-a17a-78af546d0fb2-kube-api-access-f8h2w\") pod \"redhat-marketplace-rjp8r\" (UID: \"d9cab2e1-f235-4ddd-a17a-78af546d0fb2\") " pod="openshift-marketplace/redhat-marketplace-rjp8r" Feb 19 10:40:07 crc 
kubenswrapper[4965]: I0219 10:40:07.992967 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9cab2e1-f235-4ddd-a17a-78af546d0fb2-utilities\") pod \"redhat-marketplace-rjp8r\" (UID: \"d9cab2e1-f235-4ddd-a17a-78af546d0fb2\") " pod="openshift-marketplace/redhat-marketplace-rjp8r" Feb 19 10:40:07 crc kubenswrapper[4965]: I0219 10:40:07.993012 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9cab2e1-f235-4ddd-a17a-78af546d0fb2-catalog-content\") pod \"redhat-marketplace-rjp8r\" (UID: \"d9cab2e1-f235-4ddd-a17a-78af546d0fb2\") " pod="openshift-marketplace/redhat-marketplace-rjp8r" Feb 19 10:40:07 crc kubenswrapper[4965]: I0219 10:40:07.993646 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9cab2e1-f235-4ddd-a17a-78af546d0fb2-catalog-content\") pod \"redhat-marketplace-rjp8r\" (UID: \"d9cab2e1-f235-4ddd-a17a-78af546d0fb2\") " pod="openshift-marketplace/redhat-marketplace-rjp8r" Feb 19 10:40:07 crc kubenswrapper[4965]: I0219 10:40:07.993672 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9cab2e1-f235-4ddd-a17a-78af546d0fb2-utilities\") pod \"redhat-marketplace-rjp8r\" (UID: \"d9cab2e1-f235-4ddd-a17a-78af546d0fb2\") " pod="openshift-marketplace/redhat-marketplace-rjp8r" Feb 19 10:40:08 crc kubenswrapper[4965]: I0219 10:40:08.014939 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8h2w\" (UniqueName: \"kubernetes.io/projected/d9cab2e1-f235-4ddd-a17a-78af546d0fb2-kube-api-access-f8h2w\") pod \"redhat-marketplace-rjp8r\" (UID: \"d9cab2e1-f235-4ddd-a17a-78af546d0fb2\") " pod="openshift-marketplace/redhat-marketplace-rjp8r" Feb 19 10:40:08 crc kubenswrapper[4965]: I0219 10:40:08.130380 4965 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjp8r" Feb 19 10:40:08 crc kubenswrapper[4965]: I0219 10:40:08.681098 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjp8r"] Feb 19 10:40:09 crc kubenswrapper[4965]: I0219 10:40:09.360523 4965 generic.go:334] "Generic (PLEG): container finished" podID="d9cab2e1-f235-4ddd-a17a-78af546d0fb2" containerID="d7c0bb7a6076ce3c2e6f55678203925fa30e51ccc1bf91a9cd90de4731f96d70" exitCode=0 Feb 19 10:40:09 crc kubenswrapper[4965]: I0219 10:40:09.360578 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjp8r" event={"ID":"d9cab2e1-f235-4ddd-a17a-78af546d0fb2","Type":"ContainerDied","Data":"d7c0bb7a6076ce3c2e6f55678203925fa30e51ccc1bf91a9cd90de4731f96d70"} Feb 19 10:40:09 crc kubenswrapper[4965]: I0219 10:40:09.360783 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjp8r" event={"ID":"d9cab2e1-f235-4ddd-a17a-78af546d0fb2","Type":"ContainerStarted","Data":"db8a3783b8cf06516a945b1dc076aa40cb0198f62f00ae00a2266eaa6aa5eb21"} Feb 19 10:40:10 crc kubenswrapper[4965]: I0219 10:40:10.327979 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-qfjz7_97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1/prometheus-operator/0.log" Feb 19 10:40:10 crc kubenswrapper[4965]: I0219 10:40:10.516630 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5675bf8465-d42dv_0e50e1bd-3144-4362-9c46-355cfb2ba24f/prometheus-operator-admission-webhook/0.log" Feb 19 10:40:10 crc kubenswrapper[4965]: I0219 10:40:10.623933 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5675bf8465-h45db_0d85e95a-22ec-4364-a43c-04e60d68be0d/prometheus-operator-admission-webhook/0.log" Feb 19 10:40:10 crc kubenswrapper[4965]: I0219 10:40:10.842271 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-h4689_b7e1070f-f099-4a4f-a107-c1b8589af7c7/operator/0.log" Feb 19 10:40:10 crc kubenswrapper[4965]: I0219 10:40:10.865375 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-x7xjb_d55c4261-3d41-49fd-97dd-098bb8747449/perses-operator/0.log" Feb 19 10:40:11 crc kubenswrapper[4965]: I0219 10:40:11.378493 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjp8r" event={"ID":"d9cab2e1-f235-4ddd-a17a-78af546d0fb2","Type":"ContainerStarted","Data":"9e516e7224adaa7e4345c162fc4162e6922c0ff12e14e9924fc2d4413261e9a2"} Feb 19 10:40:12 crc kubenswrapper[4965]: I0219 10:40:12.452489 4965 generic.go:334] "Generic (PLEG): container finished" podID="d9cab2e1-f235-4ddd-a17a-78af546d0fb2" containerID="9e516e7224adaa7e4345c162fc4162e6922c0ff12e14e9924fc2d4413261e9a2" exitCode=0 Feb 19 10:40:12 crc kubenswrapper[4965]: I0219 10:40:12.452845 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjp8r" event={"ID":"d9cab2e1-f235-4ddd-a17a-78af546d0fb2","Type":"ContainerDied","Data":"9e516e7224adaa7e4345c162fc4162e6922c0ff12e14e9924fc2d4413261e9a2"} Feb 19 10:40:13 crc kubenswrapper[4965]: I0219 10:40:13.481778 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjp8r" event={"ID":"d9cab2e1-f235-4ddd-a17a-78af546d0fb2","Type":"ContainerStarted","Data":"798aa2ff8d13ca96630fd57302b6041fedad139cf2cabcbd41fcd9359e21e3ae"} Feb 19 10:40:13 crc kubenswrapper[4965]: I0219 10:40:13.502347 4965 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/redhat-marketplace-rjp8r" podStartSLOduration=3.019398051 podStartE2EDuration="6.502324607s" podCreationTimestamp="2026-02-19 10:40:07 +0000 UTC" firstStartedPulling="2026-02-19 10:40:09.363390371 +0000 UTC m=+3464.984711681" lastFinishedPulling="2026-02-19 10:40:12.846316927 +0000 UTC m=+3468.467638237" observedRunningTime="2026-02-19 10:40:13.500723428 +0000 UTC m=+3469.122044748" watchObservedRunningTime="2026-02-19 10:40:13.502324607 +0000 UTC m=+3469.123645917" Feb 19 10:40:18 crc kubenswrapper[4965]: I0219 10:40:18.130666 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rjp8r" Feb 19 10:40:18 crc kubenswrapper[4965]: I0219 10:40:18.131109 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rjp8r" Feb 19 10:40:18 crc kubenswrapper[4965]: I0219 10:40:18.187385 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rjp8r" Feb 19 10:40:18 crc kubenswrapper[4965]: I0219 10:40:18.592500 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rjp8r" Feb 19 10:40:18 crc kubenswrapper[4965]: I0219 10:40:18.764769 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjp8r"] Feb 19 10:40:20 crc kubenswrapper[4965]: I0219 10:40:20.561961 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rjp8r" podUID="d9cab2e1-f235-4ddd-a17a-78af546d0fb2" containerName="registry-server" containerID="cri-o://798aa2ff8d13ca96630fd57302b6041fedad139cf2cabcbd41fcd9359e21e3ae" gracePeriod=2 Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.386086 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjp8r" Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.573215 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rjp8r" Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.573378 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjp8r" event={"ID":"d9cab2e1-f235-4ddd-a17a-78af546d0fb2","Type":"ContainerDied","Data":"798aa2ff8d13ca96630fd57302b6041fedad139cf2cabcbd41fcd9359e21e3ae"} Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.573458 4965 scope.go:117] "RemoveContainer" containerID="798aa2ff8d13ca96630fd57302b6041fedad139cf2cabcbd41fcd9359e21e3ae" Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.573734 4965 generic.go:334] "Generic (PLEG): container finished" podID="d9cab2e1-f235-4ddd-a17a-78af546d0fb2" containerID="798aa2ff8d13ca96630fd57302b6041fedad139cf2cabcbd41fcd9359e21e3ae" exitCode=0 Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.573769 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rjp8r" event={"ID":"d9cab2e1-f235-4ddd-a17a-78af546d0fb2","Type":"ContainerDied","Data":"db8a3783b8cf06516a945b1dc076aa40cb0198f62f00ae00a2266eaa6aa5eb21"} Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.583911 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9cab2e1-f235-4ddd-a17a-78af546d0fb2-utilities\") pod \"d9cab2e1-f235-4ddd-a17a-78af546d0fb2\" (UID: \"d9cab2e1-f235-4ddd-a17a-78af546d0fb2\") " Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.584086 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9cab2e1-f235-4ddd-a17a-78af546d0fb2-catalog-content\") pod 
\"d9cab2e1-f235-4ddd-a17a-78af546d0fb2\" (UID: \"d9cab2e1-f235-4ddd-a17a-78af546d0fb2\") " Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.584126 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8h2w\" (UniqueName: \"kubernetes.io/projected/d9cab2e1-f235-4ddd-a17a-78af546d0fb2-kube-api-access-f8h2w\") pod \"d9cab2e1-f235-4ddd-a17a-78af546d0fb2\" (UID: \"d9cab2e1-f235-4ddd-a17a-78af546d0fb2\") " Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.585550 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9cab2e1-f235-4ddd-a17a-78af546d0fb2-utilities" (OuterVolumeSpecName: "utilities") pod "d9cab2e1-f235-4ddd-a17a-78af546d0fb2" (UID: "d9cab2e1-f235-4ddd-a17a-78af546d0fb2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.600229 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9cab2e1-f235-4ddd-a17a-78af546d0fb2-kube-api-access-f8h2w" (OuterVolumeSpecName: "kube-api-access-f8h2w") pod "d9cab2e1-f235-4ddd-a17a-78af546d0fb2" (UID: "d9cab2e1-f235-4ddd-a17a-78af546d0fb2"). InnerVolumeSpecName "kube-api-access-f8h2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.606381 4965 scope.go:117] "RemoveContainer" containerID="9e516e7224adaa7e4345c162fc4162e6922c0ff12e14e9924fc2d4413261e9a2" Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.611837 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9cab2e1-f235-4ddd-a17a-78af546d0fb2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9cab2e1-f235-4ddd-a17a-78af546d0fb2" (UID: "d9cab2e1-f235-4ddd-a17a-78af546d0fb2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.686704 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8h2w\" (UniqueName: \"kubernetes.io/projected/d9cab2e1-f235-4ddd-a17a-78af546d0fb2-kube-api-access-f8h2w\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.686940 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9cab2e1-f235-4ddd-a17a-78af546d0fb2-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.686949 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9cab2e1-f235-4ddd-a17a-78af546d0fb2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.700659 4965 scope.go:117] "RemoveContainer" containerID="d7c0bb7a6076ce3c2e6f55678203925fa30e51ccc1bf91a9cd90de4731f96d70" Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.750820 4965 scope.go:117] "RemoveContainer" containerID="798aa2ff8d13ca96630fd57302b6041fedad139cf2cabcbd41fcd9359e21e3ae" Feb 19 10:40:21 crc kubenswrapper[4965]: E0219 10:40:21.752757 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"798aa2ff8d13ca96630fd57302b6041fedad139cf2cabcbd41fcd9359e21e3ae\": container with ID starting with 798aa2ff8d13ca96630fd57302b6041fedad139cf2cabcbd41fcd9359e21e3ae not found: ID does not exist" containerID="798aa2ff8d13ca96630fd57302b6041fedad139cf2cabcbd41fcd9359e21e3ae" Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.752803 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"798aa2ff8d13ca96630fd57302b6041fedad139cf2cabcbd41fcd9359e21e3ae"} err="failed to get container status 
\"798aa2ff8d13ca96630fd57302b6041fedad139cf2cabcbd41fcd9359e21e3ae\": rpc error: code = NotFound desc = could not find container \"798aa2ff8d13ca96630fd57302b6041fedad139cf2cabcbd41fcd9359e21e3ae\": container with ID starting with 798aa2ff8d13ca96630fd57302b6041fedad139cf2cabcbd41fcd9359e21e3ae not found: ID does not exist" Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.752828 4965 scope.go:117] "RemoveContainer" containerID="9e516e7224adaa7e4345c162fc4162e6922c0ff12e14e9924fc2d4413261e9a2" Feb 19 10:40:21 crc kubenswrapper[4965]: E0219 10:40:21.755006 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e516e7224adaa7e4345c162fc4162e6922c0ff12e14e9924fc2d4413261e9a2\": container with ID starting with 9e516e7224adaa7e4345c162fc4162e6922c0ff12e14e9924fc2d4413261e9a2 not found: ID does not exist" containerID="9e516e7224adaa7e4345c162fc4162e6922c0ff12e14e9924fc2d4413261e9a2" Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.755037 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e516e7224adaa7e4345c162fc4162e6922c0ff12e14e9924fc2d4413261e9a2"} err="failed to get container status \"9e516e7224adaa7e4345c162fc4162e6922c0ff12e14e9924fc2d4413261e9a2\": rpc error: code = NotFound desc = could not find container \"9e516e7224adaa7e4345c162fc4162e6922c0ff12e14e9924fc2d4413261e9a2\": container with ID starting with 9e516e7224adaa7e4345c162fc4162e6922c0ff12e14e9924fc2d4413261e9a2 not found: ID does not exist" Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.755051 4965 scope.go:117] "RemoveContainer" containerID="d7c0bb7a6076ce3c2e6f55678203925fa30e51ccc1bf91a9cd90de4731f96d70" Feb 19 10:40:21 crc kubenswrapper[4965]: E0219 10:40:21.756518 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d7c0bb7a6076ce3c2e6f55678203925fa30e51ccc1bf91a9cd90de4731f96d70\": container with ID starting with d7c0bb7a6076ce3c2e6f55678203925fa30e51ccc1bf91a9cd90de4731f96d70 not found: ID does not exist" containerID="d7c0bb7a6076ce3c2e6f55678203925fa30e51ccc1bf91a9cd90de4731f96d70" Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.756540 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7c0bb7a6076ce3c2e6f55678203925fa30e51ccc1bf91a9cd90de4731f96d70"} err="failed to get container status \"d7c0bb7a6076ce3c2e6f55678203925fa30e51ccc1bf91a9cd90de4731f96d70\": rpc error: code = NotFound desc = could not find container \"d7c0bb7a6076ce3c2e6f55678203925fa30e51ccc1bf91a9cd90de4731f96d70\": container with ID starting with d7c0bb7a6076ce3c2e6f55678203925fa30e51ccc1bf91a9cd90de4731f96d70 not found: ID does not exist" Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.910636 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjp8r"] Feb 19 10:40:21 crc kubenswrapper[4965]: I0219 10:40:21.919763 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rjp8r"] Feb 19 10:40:23 crc kubenswrapper[4965]: I0219 10:40:23.208405 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9cab2e1-f235-4ddd-a17a-78af546d0fb2" path="/var/lib/kubelet/pods/d9cab2e1-f235-4ddd-a17a-78af546d0fb2/volumes" Feb 19 10:40:26 crc kubenswrapper[4965]: I0219 10:40:26.166765 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jc5qp"] Feb 19 10:40:26 crc kubenswrapper[4965]: E0219 10:40:26.167787 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9cab2e1-f235-4ddd-a17a-78af546d0fb2" containerName="extract-utilities" Feb 19 10:40:26 crc kubenswrapper[4965]: I0219 10:40:26.167802 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9cab2e1-f235-4ddd-a17a-78af546d0fb2" 
containerName="extract-utilities" Feb 19 10:40:26 crc kubenswrapper[4965]: E0219 10:40:26.167833 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9cab2e1-f235-4ddd-a17a-78af546d0fb2" containerName="extract-content" Feb 19 10:40:26 crc kubenswrapper[4965]: I0219 10:40:26.167840 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9cab2e1-f235-4ddd-a17a-78af546d0fb2" containerName="extract-content" Feb 19 10:40:26 crc kubenswrapper[4965]: E0219 10:40:26.167865 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9cab2e1-f235-4ddd-a17a-78af546d0fb2" containerName="registry-server" Feb 19 10:40:26 crc kubenswrapper[4965]: I0219 10:40:26.167872 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9cab2e1-f235-4ddd-a17a-78af546d0fb2" containerName="registry-server" Feb 19 10:40:26 crc kubenswrapper[4965]: I0219 10:40:26.168122 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9cab2e1-f235-4ddd-a17a-78af546d0fb2" containerName="registry-server" Feb 19 10:40:26 crc kubenswrapper[4965]: I0219 10:40:26.170086 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jc5qp" Feb 19 10:40:26 crc kubenswrapper[4965]: I0219 10:40:26.188805 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jc5qp"] Feb 19 10:40:26 crc kubenswrapper[4965]: I0219 10:40:26.284264 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klb4b\" (UniqueName: \"kubernetes.io/projected/8a373ea3-9887-4a28-8fe0-39976a63f8f2-kube-api-access-klb4b\") pod \"certified-operators-jc5qp\" (UID: \"8a373ea3-9887-4a28-8fe0-39976a63f8f2\") " pod="openshift-marketplace/certified-operators-jc5qp" Feb 19 10:40:26 crc kubenswrapper[4965]: I0219 10:40:26.284387 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a373ea3-9887-4a28-8fe0-39976a63f8f2-catalog-content\") pod \"certified-operators-jc5qp\" (UID: \"8a373ea3-9887-4a28-8fe0-39976a63f8f2\") " pod="openshift-marketplace/certified-operators-jc5qp" Feb 19 10:40:26 crc kubenswrapper[4965]: I0219 10:40:26.284443 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a373ea3-9887-4a28-8fe0-39976a63f8f2-utilities\") pod \"certified-operators-jc5qp\" (UID: \"8a373ea3-9887-4a28-8fe0-39976a63f8f2\") " pod="openshift-marketplace/certified-operators-jc5qp" Feb 19 10:40:26 crc kubenswrapper[4965]: I0219 10:40:26.386644 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klb4b\" (UniqueName: \"kubernetes.io/projected/8a373ea3-9887-4a28-8fe0-39976a63f8f2-kube-api-access-klb4b\") pod \"certified-operators-jc5qp\" (UID: \"8a373ea3-9887-4a28-8fe0-39976a63f8f2\") " pod="openshift-marketplace/certified-operators-jc5qp" Feb 19 10:40:26 crc kubenswrapper[4965]: I0219 10:40:26.386779 4965 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a373ea3-9887-4a28-8fe0-39976a63f8f2-catalog-content\") pod \"certified-operators-jc5qp\" (UID: \"8a373ea3-9887-4a28-8fe0-39976a63f8f2\") " pod="openshift-marketplace/certified-operators-jc5qp" Feb 19 10:40:26 crc kubenswrapper[4965]: I0219 10:40:26.386855 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a373ea3-9887-4a28-8fe0-39976a63f8f2-utilities\") pod \"certified-operators-jc5qp\" (UID: \"8a373ea3-9887-4a28-8fe0-39976a63f8f2\") " pod="openshift-marketplace/certified-operators-jc5qp" Feb 19 10:40:26 crc kubenswrapper[4965]: I0219 10:40:26.387335 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a373ea3-9887-4a28-8fe0-39976a63f8f2-catalog-content\") pod \"certified-operators-jc5qp\" (UID: \"8a373ea3-9887-4a28-8fe0-39976a63f8f2\") " pod="openshift-marketplace/certified-operators-jc5qp" Feb 19 10:40:26 crc kubenswrapper[4965]: I0219 10:40:26.387676 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a373ea3-9887-4a28-8fe0-39976a63f8f2-utilities\") pod \"certified-operators-jc5qp\" (UID: \"8a373ea3-9887-4a28-8fe0-39976a63f8f2\") " pod="openshift-marketplace/certified-operators-jc5qp" Feb 19 10:40:26 crc kubenswrapper[4965]: I0219 10:40:26.407657 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klb4b\" (UniqueName: \"kubernetes.io/projected/8a373ea3-9887-4a28-8fe0-39976a63f8f2-kube-api-access-klb4b\") pod \"certified-operators-jc5qp\" (UID: \"8a373ea3-9887-4a28-8fe0-39976a63f8f2\") " pod="openshift-marketplace/certified-operators-jc5qp" Feb 19 10:40:26 crc kubenswrapper[4965]: I0219 10:40:26.499844 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jc5qp" Feb 19 10:40:27 crc kubenswrapper[4965]: I0219 10:40:27.106511 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jc5qp"] Feb 19 10:40:27 crc kubenswrapper[4965]: I0219 10:40:27.647702 4965 generic.go:334] "Generic (PLEG): container finished" podID="8a373ea3-9887-4a28-8fe0-39976a63f8f2" containerID="499bf44f9315a956af977f3972043a21dde39f7b4215d4c116b1a57310c8b4e8" exitCode=0 Feb 19 10:40:27 crc kubenswrapper[4965]: I0219 10:40:27.647875 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jc5qp" event={"ID":"8a373ea3-9887-4a28-8fe0-39976a63f8f2","Type":"ContainerDied","Data":"499bf44f9315a956af977f3972043a21dde39f7b4215d4c116b1a57310c8b4e8"} Feb 19 10:40:27 crc kubenswrapper[4965]: I0219 10:40:27.648013 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jc5qp" event={"ID":"8a373ea3-9887-4a28-8fe0-39976a63f8f2","Type":"ContainerStarted","Data":"f0a7fe5c899afbc86822efb73d4ab15ba196011a2a85564949c2ac416e4bf1f0"} Feb 19 10:40:27 crc kubenswrapper[4965]: I0219 10:40:27.652073 4965 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:40:27 crc kubenswrapper[4965]: I0219 10:40:27.934848 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-62qgz_4fb263d7-f864-47e8-ba07-5a8860db5d11/kube-rbac-proxy/0.log" Feb 19 10:40:28 crc kubenswrapper[4965]: I0219 10:40:28.114922 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-62qgz_4fb263d7-f864-47e8-ba07-5a8860db5d11/controller/0.log" Feb 19 10:40:28 crc kubenswrapper[4965]: I0219 10:40:28.330539 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-frr-files/0.log" Feb 19 
10:40:28 crc kubenswrapper[4965]: I0219 10:40:28.620041 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-reloader/0.log" Feb 19 10:40:28 crc kubenswrapper[4965]: I0219 10:40:28.620677 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-frr-files/0.log" Feb 19 10:40:28 crc kubenswrapper[4965]: I0219 10:40:28.654790 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-metrics/0.log" Feb 19 10:40:28 crc kubenswrapper[4965]: I0219 10:40:28.658270 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-reloader/0.log" Feb 19 10:40:28 crc kubenswrapper[4965]: I0219 10:40:28.659947 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jc5qp" event={"ID":"8a373ea3-9887-4a28-8fe0-39976a63f8f2","Type":"ContainerStarted","Data":"13a081b1eefa4786c49f6f6c634fe5abdc337f16d8ff49ec47b46ad63bc9805e"} Feb 19 10:40:28 crc kubenswrapper[4965]: I0219 10:40:28.858053 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-frr-files/0.log" Feb 19 10:40:28 crc kubenswrapper[4965]: I0219 10:40:28.879568 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-metrics/0.log" Feb 19 10:40:28 crc kubenswrapper[4965]: I0219 10:40:28.887936 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-metrics/0.log" Feb 19 10:40:28 crc kubenswrapper[4965]: I0219 10:40:28.967113 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-reloader/0.log" Feb 19 10:40:29 crc kubenswrapper[4965]: I0219 10:40:29.107500 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-reloader/0.log" Feb 19 10:40:29 crc kubenswrapper[4965]: I0219 10:40:29.113017 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-frr-files/0.log" Feb 19 10:40:29 crc kubenswrapper[4965]: I0219 10:40:29.142028 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-metrics/0.log" Feb 19 10:40:29 crc kubenswrapper[4965]: I0219 10:40:29.183164 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/controller/0.log" Feb 19 10:40:29 crc kubenswrapper[4965]: I0219 10:40:29.333898 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/kube-rbac-proxy/0.log" Feb 19 10:40:29 crc kubenswrapper[4965]: I0219 10:40:29.345874 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/kube-rbac-proxy-frr/0.log" Feb 19 10:40:29 crc kubenswrapper[4965]: I0219 10:40:29.367612 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/frr-metrics/0.log" Feb 19 10:40:29 crc kubenswrapper[4965]: I0219 10:40:29.600870 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/reloader/0.log" Feb 19 10:40:29 crc kubenswrapper[4965]: I0219 10:40:29.738322 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-l7r8n_de0e351f-d402-4a5b-8942-d22a20ad2fa4/frr-k8s-webhook-server/0.log" Feb 19 10:40:30 crc kubenswrapper[4965]: I0219 10:40:30.055789 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-fc95c66df-lk6qw_6be4b034-d7e8-410b-bbef-e4989108becd/webhook-server/0.log" Feb 19 10:40:30 crc kubenswrapper[4965]: I0219 10:40:30.071365 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7df4b8cb75-tnc6t_281afb41-32a0-42c3-b25c-e2b5ee969867/manager/0.log" Feb 19 10:40:30 crc kubenswrapper[4965]: I0219 10:40:30.390399 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5f69j_2a12c073-8d46-4579-a422-6344a8a4959f/kube-rbac-proxy/0.log" Feb 19 10:40:31 crc kubenswrapper[4965]: I0219 10:40:31.286485 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/frr/0.log" Feb 19 10:40:31 crc kubenswrapper[4965]: I0219 10:40:31.512964 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5f69j_2a12c073-8d46-4579-a422-6344a8a4959f/speaker/0.log" Feb 19 10:40:31 crc kubenswrapper[4965]: I0219 10:40:31.686295 4965 generic.go:334] "Generic (PLEG): container finished" podID="8a373ea3-9887-4a28-8fe0-39976a63f8f2" containerID="13a081b1eefa4786c49f6f6c634fe5abdc337f16d8ff49ec47b46ad63bc9805e" exitCode=0 Feb 19 10:40:31 crc kubenswrapper[4965]: I0219 10:40:31.686378 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jc5qp" event={"ID":"8a373ea3-9887-4a28-8fe0-39976a63f8f2","Type":"ContainerDied","Data":"13a081b1eefa4786c49f6f6c634fe5abdc337f16d8ff49ec47b46ad63bc9805e"} Feb 19 10:40:33 crc kubenswrapper[4965]: I0219 10:40:33.705923 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-jc5qp" event={"ID":"8a373ea3-9887-4a28-8fe0-39976a63f8f2","Type":"ContainerStarted","Data":"49115bee25cec8f2aef7e7a58a34e563bb29a65ba953fff62859e3a2be9c4e3b"} Feb 19 10:40:33 crc kubenswrapper[4965]: I0219 10:40:33.738475 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jc5qp" podStartSLOduration=2.890140922 podStartE2EDuration="7.738448361s" podCreationTimestamp="2026-02-19 10:40:26 +0000 UTC" firstStartedPulling="2026-02-19 10:40:27.651743146 +0000 UTC m=+3483.273064456" lastFinishedPulling="2026-02-19 10:40:32.500050585 +0000 UTC m=+3488.121371895" observedRunningTime="2026-02-19 10:40:33.728274663 +0000 UTC m=+3489.349595973" watchObservedRunningTime="2026-02-19 10:40:33.738448361 +0000 UTC m=+3489.359769671" Feb 19 10:40:36 crc kubenswrapper[4965]: I0219 10:40:36.500918 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jc5qp" Feb 19 10:40:36 crc kubenswrapper[4965]: I0219 10:40:36.501307 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jc5qp" Feb 19 10:40:36 crc kubenswrapper[4965]: I0219 10:40:36.549628 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jc5qp" Feb 19 10:40:44 crc kubenswrapper[4965]: I0219 10:40:44.753789 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v_476478a2-24c2-4386-9876-ab59f36cabbf/util/0.log" Feb 19 10:40:45 crc kubenswrapper[4965]: I0219 10:40:45.169673 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v_476478a2-24c2-4386-9876-ab59f36cabbf/util/0.log" Feb 19 10:40:45 crc kubenswrapper[4965]: I0219 10:40:45.171646 
4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v_476478a2-24c2-4386-9876-ab59f36cabbf/pull/0.log" Feb 19 10:40:45 crc kubenswrapper[4965]: I0219 10:40:45.244915 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v_476478a2-24c2-4386-9876-ab59f36cabbf/pull/0.log" Feb 19 10:40:45 crc kubenswrapper[4965]: I0219 10:40:45.417072 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v_476478a2-24c2-4386-9876-ab59f36cabbf/pull/0.log" Feb 19 10:40:45 crc kubenswrapper[4965]: I0219 10:40:45.454744 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v_476478a2-24c2-4386-9876-ab59f36cabbf/extract/0.log" Feb 19 10:40:45 crc kubenswrapper[4965]: I0219 10:40:45.488914 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v_476478a2-24c2-4386-9876-ab59f36cabbf/util/0.log" Feb 19 10:40:45 crc kubenswrapper[4965]: I0219 10:40:45.667464 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7_73c31c1a-7233-4c2c-b79b-70abd832d746/util/0.log" Feb 19 10:40:45 crc kubenswrapper[4965]: I0219 10:40:45.852851 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7_73c31c1a-7233-4c2c-b79b-70abd832d746/util/0.log" Feb 19 10:40:45 crc kubenswrapper[4965]: I0219 10:40:45.859314 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7_73c31c1a-7233-4c2c-b79b-70abd832d746/pull/0.log" Feb 19 10:40:45 crc kubenswrapper[4965]: I0219 10:40:45.861700 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7_73c31c1a-7233-4c2c-b79b-70abd832d746/pull/0.log" Feb 19 10:40:46 crc kubenswrapper[4965]: I0219 10:40:46.060847 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7_73c31c1a-7233-4c2c-b79b-70abd832d746/pull/0.log" Feb 19 10:40:46 crc kubenswrapper[4965]: I0219 10:40:46.061658 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7_73c31c1a-7233-4c2c-b79b-70abd832d746/extract/0.log" Feb 19 10:40:46 crc kubenswrapper[4965]: I0219 10:40:46.073165 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7_73c31c1a-7233-4c2c-b79b-70abd832d746/util/0.log" Feb 19 10:40:46 crc kubenswrapper[4965]: I0219 10:40:46.218810 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6_38b3ecaf-956c-479a-8c6e-4e0dd083f186/util/0.log" Feb 19 10:40:46 crc kubenswrapper[4965]: I0219 10:40:46.404292 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6_38b3ecaf-956c-479a-8c6e-4e0dd083f186/util/0.log" Feb 19 10:40:46 crc kubenswrapper[4965]: I0219 10:40:46.409918 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6_38b3ecaf-956c-479a-8c6e-4e0dd083f186/pull/0.log" Feb 19 
10:40:46 crc kubenswrapper[4965]: I0219 10:40:46.410087 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6_38b3ecaf-956c-479a-8c6e-4e0dd083f186/pull/0.log" Feb 19 10:40:46 crc kubenswrapper[4965]: I0219 10:40:46.550086 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jc5qp" Feb 19 10:40:46 crc kubenswrapper[4965]: I0219 10:40:46.611468 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jc5qp"] Feb 19 10:40:46 crc kubenswrapper[4965]: I0219 10:40:46.614148 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6_38b3ecaf-956c-479a-8c6e-4e0dd083f186/util/0.log" Feb 19 10:40:46 crc kubenswrapper[4965]: I0219 10:40:46.617822 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6_38b3ecaf-956c-479a-8c6e-4e0dd083f186/pull/0.log" Feb 19 10:40:46 crc kubenswrapper[4965]: I0219 10:40:46.631605 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6_38b3ecaf-956c-479a-8c6e-4e0dd083f186/extract/0.log" Feb 19 10:40:46 crc kubenswrapper[4965]: I0219 10:40:46.830724 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jc5qp" podUID="8a373ea3-9887-4a28-8fe0-39976a63f8f2" containerName="registry-server" containerID="cri-o://49115bee25cec8f2aef7e7a58a34e563bb29a65ba953fff62859e3a2be9c4e3b" gracePeriod=2 Feb 19 10:40:46 crc kubenswrapper[4965]: I0219 10:40:46.831493 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-jc5qp_8a373ea3-9887-4a28-8fe0-39976a63f8f2/extract-utilities/0.log" Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.064836 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jc5qp_8a373ea3-9887-4a28-8fe0-39976a63f8f2/extract-utilities/0.log" Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.143290 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jc5qp_8a373ea3-9887-4a28-8fe0-39976a63f8f2/extract-content/0.log" Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.149997 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jc5qp_8a373ea3-9887-4a28-8fe0-39976a63f8f2/extract-content/0.log" Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.352298 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jc5qp_8a373ea3-9887-4a28-8fe0-39976a63f8f2/extract-utilities/0.log" Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.384761 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jc5qp_8a373ea3-9887-4a28-8fe0-39976a63f8f2/extract-content/0.log" Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.459606 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jc5qp_8a373ea3-9887-4a28-8fe0-39976a63f8f2/registry-server/0.log" Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.549842 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jc5qp" Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.595429 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jjz7h_28ed8d8d-3c38-43ae-b93c-d5c88112a04b/extract-utilities/0.log" Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.600116 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a373ea3-9887-4a28-8fe0-39976a63f8f2-catalog-content\") pod \"8a373ea3-9887-4a28-8fe0-39976a63f8f2\" (UID: \"8a373ea3-9887-4a28-8fe0-39976a63f8f2\") " Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.601388 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klb4b\" (UniqueName: \"kubernetes.io/projected/8a373ea3-9887-4a28-8fe0-39976a63f8f2-kube-api-access-klb4b\") pod \"8a373ea3-9887-4a28-8fe0-39976a63f8f2\" (UID: \"8a373ea3-9887-4a28-8fe0-39976a63f8f2\") " Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.601545 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a373ea3-9887-4a28-8fe0-39976a63f8f2-utilities\") pod \"8a373ea3-9887-4a28-8fe0-39976a63f8f2\" (UID: \"8a373ea3-9887-4a28-8fe0-39976a63f8f2\") " Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.602938 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a373ea3-9887-4a28-8fe0-39976a63f8f2-utilities" (OuterVolumeSpecName: "utilities") pod "8a373ea3-9887-4a28-8fe0-39976a63f8f2" (UID: "8a373ea3-9887-4a28-8fe0-39976a63f8f2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.615879 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a373ea3-9887-4a28-8fe0-39976a63f8f2-kube-api-access-klb4b" (OuterVolumeSpecName: "kube-api-access-klb4b") pod "8a373ea3-9887-4a28-8fe0-39976a63f8f2" (UID: "8a373ea3-9887-4a28-8fe0-39976a63f8f2"). InnerVolumeSpecName "kube-api-access-klb4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.656490 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a373ea3-9887-4a28-8fe0-39976a63f8f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a373ea3-9887-4a28-8fe0-39976a63f8f2" (UID: "8a373ea3-9887-4a28-8fe0-39976a63f8f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.703941 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a373ea3-9887-4a28-8fe0-39976a63f8f2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.703975 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klb4b\" (UniqueName: \"kubernetes.io/projected/8a373ea3-9887-4a28-8fe0-39976a63f8f2-kube-api-access-klb4b\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.703986 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a373ea3-9887-4a28-8fe0-39976a63f8f2-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.779477 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-jjz7h_28ed8d8d-3c38-43ae-b93c-d5c88112a04b/extract-utilities/0.log" Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.849778 4965 generic.go:334] "Generic (PLEG): container finished" podID="8a373ea3-9887-4a28-8fe0-39976a63f8f2" containerID="49115bee25cec8f2aef7e7a58a34e563bb29a65ba953fff62859e3a2be9c4e3b" exitCode=0 Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.849831 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jc5qp" event={"ID":"8a373ea3-9887-4a28-8fe0-39976a63f8f2","Type":"ContainerDied","Data":"49115bee25cec8f2aef7e7a58a34e563bb29a65ba953fff62859e3a2be9c4e3b"} Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.849866 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jc5qp" event={"ID":"8a373ea3-9887-4a28-8fe0-39976a63f8f2","Type":"ContainerDied","Data":"f0a7fe5c899afbc86822efb73d4ab15ba196011a2a85564949c2ac416e4bf1f0"} Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.849889 4965 scope.go:117] "RemoveContainer" containerID="49115bee25cec8f2aef7e7a58a34e563bb29a65ba953fff62859e3a2be9c4e3b" Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.849948 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jc5qp" Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.867421 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jjz7h_28ed8d8d-3c38-43ae-b93c-d5c88112a04b/extract-content/0.log" Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.882161 4965 scope.go:117] "RemoveContainer" containerID="13a081b1eefa4786c49f6f6c634fe5abdc337f16d8ff49ec47b46ad63bc9805e" Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.903401 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jc5qp"] Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.917683 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jc5qp"] Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.920926 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jjz7h_28ed8d8d-3c38-43ae-b93c-d5c88112a04b/extract-content/0.log" Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.921725 4965 scope.go:117] "RemoveContainer" containerID="499bf44f9315a956af977f3972043a21dde39f7b4215d4c116b1a57310c8b4e8" Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.960037 4965 scope.go:117] "RemoveContainer" containerID="49115bee25cec8f2aef7e7a58a34e563bb29a65ba953fff62859e3a2be9c4e3b" Feb 19 10:40:47 crc kubenswrapper[4965]: E0219 10:40:47.960593 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49115bee25cec8f2aef7e7a58a34e563bb29a65ba953fff62859e3a2be9c4e3b\": container with ID starting with 49115bee25cec8f2aef7e7a58a34e563bb29a65ba953fff62859e3a2be9c4e3b not found: ID does not exist" containerID="49115bee25cec8f2aef7e7a58a34e563bb29a65ba953fff62859e3a2be9c4e3b" Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.960637 4965 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49115bee25cec8f2aef7e7a58a34e563bb29a65ba953fff62859e3a2be9c4e3b"} err="failed to get container status \"49115bee25cec8f2aef7e7a58a34e563bb29a65ba953fff62859e3a2be9c4e3b\": rpc error: code = NotFound desc = could not find container \"49115bee25cec8f2aef7e7a58a34e563bb29a65ba953fff62859e3a2be9c4e3b\": container with ID starting with 49115bee25cec8f2aef7e7a58a34e563bb29a65ba953fff62859e3a2be9c4e3b not found: ID does not exist"
Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.960662 4965 scope.go:117] "RemoveContainer" containerID="13a081b1eefa4786c49f6f6c634fe5abdc337f16d8ff49ec47b46ad63bc9805e"
Feb 19 10:40:47 crc kubenswrapper[4965]: E0219 10:40:47.960948 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13a081b1eefa4786c49f6f6c634fe5abdc337f16d8ff49ec47b46ad63bc9805e\": container with ID starting with 13a081b1eefa4786c49f6f6c634fe5abdc337f16d8ff49ec47b46ad63bc9805e not found: ID does not exist" containerID="13a081b1eefa4786c49f6f6c634fe5abdc337f16d8ff49ec47b46ad63bc9805e"
Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.960992 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13a081b1eefa4786c49f6f6c634fe5abdc337f16d8ff49ec47b46ad63bc9805e"} err="failed to get container status \"13a081b1eefa4786c49f6f6c634fe5abdc337f16d8ff49ec47b46ad63bc9805e\": rpc error: code = NotFound desc = could not find container \"13a081b1eefa4786c49f6f6c634fe5abdc337f16d8ff49ec47b46ad63bc9805e\": container with ID starting with 13a081b1eefa4786c49f6f6c634fe5abdc337f16d8ff49ec47b46ad63bc9805e not found: ID does not exist"
Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.961019 4965 scope.go:117] "RemoveContainer" containerID="499bf44f9315a956af977f3972043a21dde39f7b4215d4c116b1a57310c8b4e8"
Feb 19 10:40:47 crc kubenswrapper[4965]: E0219 10:40:47.961404 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"499bf44f9315a956af977f3972043a21dde39f7b4215d4c116b1a57310c8b4e8\": container with ID starting with 499bf44f9315a956af977f3972043a21dde39f7b4215d4c116b1a57310c8b4e8 not found: ID does not exist" containerID="499bf44f9315a956af977f3972043a21dde39f7b4215d4c116b1a57310c8b4e8"
Feb 19 10:40:47 crc kubenswrapper[4965]: I0219 10:40:47.961432 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"499bf44f9315a956af977f3972043a21dde39f7b4215d4c116b1a57310c8b4e8"} err="failed to get container status \"499bf44f9315a956af977f3972043a21dde39f7b4215d4c116b1a57310c8b4e8\": rpc error: code = NotFound desc = could not find container \"499bf44f9315a956af977f3972043a21dde39f7b4215d4c116b1a57310c8b4e8\": container with ID starting with 499bf44f9315a956af977f3972043a21dde39f7b4215d4c116b1a57310c8b4e8 not found: ID does not exist"
Feb 19 10:40:48 crc kubenswrapper[4965]: I0219 10:40:48.095009 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jjz7h_28ed8d8d-3c38-43ae-b93c-d5c88112a04b/extract-utilities/0.log"
Feb 19 10:40:48 crc kubenswrapper[4965]: I0219 10:40:48.135140 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jjz7h_28ed8d8d-3c38-43ae-b93c-d5c88112a04b/extract-content/0.log"
Feb 19 10:40:48 crc kubenswrapper[4965]: I0219 10:40:48.383216 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jjz7h_28ed8d8d-3c38-43ae-b93c-d5c88112a04b/registry-server/0.log"
Feb 19 10:40:48 crc kubenswrapper[4965]: I0219 10:40:48.385891 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6ml7g_32baa37b-a196-447f-af2a-0f1cc92785d8/extract-utilities/0.log"
Feb 19 10:40:48 crc kubenswrapper[4965]: I0219 10:40:48.573112 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6ml7g_32baa37b-a196-447f-af2a-0f1cc92785d8/extract-content/0.log"
Feb 19 10:40:48 crc kubenswrapper[4965]: I0219 10:40:48.621397 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6ml7g_32baa37b-a196-447f-af2a-0f1cc92785d8/extract-content/0.log"
Feb 19 10:40:48 crc kubenswrapper[4965]: I0219 10:40:48.633188 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6ml7g_32baa37b-a196-447f-af2a-0f1cc92785d8/extract-utilities/0.log"
Feb 19 10:40:48 crc kubenswrapper[4965]: I0219 10:40:48.808575 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6ml7g_32baa37b-a196-447f-af2a-0f1cc92785d8/extract-content/0.log"
Feb 19 10:40:48 crc kubenswrapper[4965]: I0219 10:40:48.886808 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6ml7g_32baa37b-a196-447f-af2a-0f1cc92785d8/extract-utilities/0.log"
Feb 19 10:40:49 crc kubenswrapper[4965]: I0219 10:40:49.210323 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t_a18e3883-75f3-47f3-a6a6-31358dbc980a/util/0.log"
Feb 19 10:40:49 crc kubenswrapper[4965]: I0219 10:40:49.223405 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a373ea3-9887-4a28-8fe0-39976a63f8f2" path="/var/lib/kubelet/pods/8a373ea3-9887-4a28-8fe0-39976a63f8f2/volumes"
Feb 19 10:40:49 crc kubenswrapper[4965]: I0219 10:40:49.399462 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t_a18e3883-75f3-47f3-a6a6-31358dbc980a/pull/0.log"
Feb 19 10:40:49 crc kubenswrapper[4965]: I0219 10:40:49.411175 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t_a18e3883-75f3-47f3-a6a6-31358dbc980a/util/0.log"
Feb 19 10:40:49 crc kubenswrapper[4965]: I0219 10:40:49.474673 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t_a18e3883-75f3-47f3-a6a6-31358dbc980a/pull/0.log"
Feb 19 10:40:49 crc kubenswrapper[4965]: I0219 10:40:49.498480 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6ml7g_32baa37b-a196-447f-af2a-0f1cc92785d8/registry-server/0.log"
Feb 19 10:40:49 crc kubenswrapper[4965]: I0219 10:40:49.646179 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t_a18e3883-75f3-47f3-a6a6-31358dbc980a/pull/0.log"
Feb 19 10:40:49 crc kubenswrapper[4965]: I0219 10:40:49.649254 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t_a18e3883-75f3-47f3-a6a6-31358dbc980a/util/0.log"
Feb 19 10:40:49 crc kubenswrapper[4965]: I0219 10:40:49.707245 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t_a18e3883-75f3-47f3-a6a6-31358dbc980a/extract/0.log"
Feb 19 10:40:49 crc kubenswrapper[4965]: I0219 10:40:49.745736 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-pbfkw_16a589f2-57f9-460f-9802-1c63bd877a05/marketplace-operator/0.log"
Feb 19 10:40:49 crc kubenswrapper[4965]: I0219 10:40:49.853718 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p94wp_f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d/extract-utilities/0.log"
Feb 19 10:40:50 crc kubenswrapper[4965]: I0219 10:40:50.058019 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p94wp_f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d/extract-content/0.log"
Feb 19 10:40:50 crc kubenswrapper[4965]: I0219 10:40:50.081424 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p94wp_f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d/extract-content/0.log"
Feb 19 10:40:50 crc kubenswrapper[4965]: I0219 10:40:50.102444 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p94wp_f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d/extract-utilities/0.log"
Feb 19 10:40:50 crc kubenswrapper[4965]: I0219 10:40:50.309850 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p94wp_f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d/extract-content/0.log"
Feb 19 10:40:50 crc kubenswrapper[4965]: I0219 10:40:50.329123 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p94wp_f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d/extract-utilities/0.log"
Feb 19 10:40:50 crc kubenswrapper[4965]: I0219 10:40:50.385820 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p94wp_f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d/registry-server/0.log"
Feb 19 10:40:50 crc kubenswrapper[4965]: I0219 10:40:50.453909 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w6fn7_59eb38d1-a115-462c-b054-4660ec8e6ac1/extract-utilities/0.log"
Feb 19 10:40:50 crc kubenswrapper[4965]: I0219 10:40:50.607927 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w6fn7_59eb38d1-a115-462c-b054-4660ec8e6ac1/extract-content/0.log"
Feb 19 10:40:50 crc kubenswrapper[4965]: I0219 10:40:50.609161 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w6fn7_59eb38d1-a115-462c-b054-4660ec8e6ac1/extract-content/0.log"
Feb 19 10:40:50 crc kubenswrapper[4965]: I0219 10:40:50.632440 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w6fn7_59eb38d1-a115-462c-b054-4660ec8e6ac1/extract-utilities/0.log"
Feb 19 10:40:50 crc kubenswrapper[4965]: I0219 10:40:50.767378 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w6fn7_59eb38d1-a115-462c-b054-4660ec8e6ac1/extract-content/0.log"
Feb 19 10:40:50 crc kubenswrapper[4965]: I0219 10:40:50.784084 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w6fn7_59eb38d1-a115-462c-b054-4660ec8e6ac1/extract-utilities/0.log"
Feb 19 10:40:51 crc kubenswrapper[4965]: I0219 10:40:51.268782 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w6fn7_59eb38d1-a115-462c-b054-4660ec8e6ac1/registry-server/0.log"
Feb 19 10:41:04 crc kubenswrapper[4965]: I0219 10:41:04.251325 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-qfjz7_97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1/prometheus-operator/0.log"
Feb 19 10:41:04 crc kubenswrapper[4965]: I0219 10:41:04.264309 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5675bf8465-d42dv_0e50e1bd-3144-4362-9c46-355cfb2ba24f/prometheus-operator-admission-webhook/0.log"
Feb 19 10:41:04 crc kubenswrapper[4965]: I0219 10:41:04.366769 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5675bf8465-h45db_0d85e95a-22ec-4364-a43c-04e60d68be0d/prometheus-operator-admission-webhook/0.log"
Feb 19 10:41:04 crc kubenswrapper[4965]: I0219 10:41:04.469005 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-h4689_b7e1070f-f099-4a4f-a107-c1b8589af7c7/operator/0.log"
Feb 19 10:41:04 crc kubenswrapper[4965]: I0219 10:41:04.524170 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-x7xjb_d55c4261-3d41-49fd-97dd-098bb8747449/perses-operator/0.log"
Feb 19 10:41:18 crc kubenswrapper[4965]: I0219 10:41:18.370996 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-564bb987d4-6pxn4_d8ed232a-7084-4f69-afdf-6d674b5864de/kube-rbac-proxy/0.log"
Feb 19 10:41:18 crc kubenswrapper[4965]: I0219 10:41:18.378492 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-564bb987d4-6pxn4_d8ed232a-7084-4f69-afdf-6d674b5864de/manager/0.log"
Feb 19 10:41:21 crc kubenswrapper[4965]: E0219 10:41:21.513314 4965 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.196:51346->38.102.83.196:35139: write tcp 38.102.83.196:51346->38.102.83.196:35139: write: broken pipe
Feb 19 10:41:33 crc kubenswrapper[4965]: E0219 10:41:33.179631 4965 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.196:37740->38.102.83.196:35139: write tcp 38.102.83.196:37740->38.102.83.196:35139: write: broken pipe
Feb 19 10:41:46 crc kubenswrapper[4965]: I0219 10:41:46.601799 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 10:41:46 crc kubenswrapper[4965]: I0219 10:41:46.602409 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 10:42:16 crc kubenswrapper[4965]: I0219 10:42:16.601567 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 10:42:16 crc kubenswrapper[4965]: I0219 10:42:16.602128 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 10:42:43 crc kubenswrapper[4965]: I0219 10:42:43.488122 4965 scope.go:117] "RemoveContainer" containerID="f2783b57bd11c70a70978b8751abd469c688f44eccc039d0088c337b5e327df4"
Feb 19 10:42:43 crc kubenswrapper[4965]: I0219 10:42:43.506780 4965 scope.go:117] "RemoveContainer" containerID="97caa4c852f0cbc9674d542a9be0c388338f2b4fc5cf288afbf9ff0adadf17cd"
Feb 19 10:42:43 crc kubenswrapper[4965]: I0219 10:42:43.542274 4965 scope.go:117] "RemoveContainer" containerID="683ba4d54ae13edce472861e9d9d950adcd3377bad3af16e3860efcd0d453bab"
Feb 19 10:42:46 crc kubenswrapper[4965]: I0219 10:42:46.601218 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 10:42:46 crc kubenswrapper[4965]: I0219 10:42:46.601769 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 10:42:46 crc kubenswrapper[4965]: I0219 10:42:46.601883 4965 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9"
Feb 19 10:42:46 crc kubenswrapper[4965]: I0219 10:42:46.602665 4965 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088"} pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 10:42:46 crc kubenswrapper[4965]: I0219 10:42:46.602726 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" containerID="cri-o://dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" gracePeriod=600
Feb 19 10:42:46 crc kubenswrapper[4965]: E0219 10:42:46.725725 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83"
Feb 19 10:42:46 crc kubenswrapper[4965]: I0219 10:42:46.960931 4965 generic.go:334] "Generic (PLEG): container finished" podID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" exitCode=0
Feb 19 10:42:46 crc kubenswrapper[4965]: I0219 10:42:46.960978 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerDied","Data":"dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088"}
Feb 19 10:42:46 crc kubenswrapper[4965]: I0219 10:42:46.961018 4965 scope.go:117] "RemoveContainer" containerID="80c316cf612b9ca1c6b347e543b0dd6345d3fe3eb0881cb783c0e74417f03cc0"
Feb 19 10:42:46 crc kubenswrapper[4965]: I0219 10:42:46.961655 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088"
Feb 19 10:42:46 crc kubenswrapper[4965]: E0219 10:42:46.962148 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83"
Feb 19 10:42:58 crc kubenswrapper[4965]: I0219 10:42:58.197871 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088"
Feb 19 10:42:58 crc kubenswrapper[4965]: E0219 10:42:58.199608 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83"
Feb 19 10:43:01 crc kubenswrapper[4965]: I0219 10:43:01.837568 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d24r2"]
Feb 19 10:43:01 crc kubenswrapper[4965]: E0219 10:43:01.838523 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a373ea3-9887-4a28-8fe0-39976a63f8f2" containerName="registry-server"
Feb 19 10:43:01 crc kubenswrapper[4965]: I0219 10:43:01.838540 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a373ea3-9887-4a28-8fe0-39976a63f8f2" containerName="registry-server"
Feb 19 10:43:01 crc kubenswrapper[4965]: E0219 10:43:01.838556 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a373ea3-9887-4a28-8fe0-39976a63f8f2" containerName="extract-utilities"
Feb 19 10:43:01 crc kubenswrapper[4965]: I0219 10:43:01.838566 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a373ea3-9887-4a28-8fe0-39976a63f8f2" containerName="extract-utilities"
Feb 19 10:43:01 crc kubenswrapper[4965]: E0219 10:43:01.838587 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a373ea3-9887-4a28-8fe0-39976a63f8f2" containerName="extract-content"
Feb 19 10:43:01 crc kubenswrapper[4965]: I0219 10:43:01.838594 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a373ea3-9887-4a28-8fe0-39976a63f8f2" containerName="extract-content"
Feb 19 10:43:01 crc kubenswrapper[4965]: I0219 10:43:01.838853 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a373ea3-9887-4a28-8fe0-39976a63f8f2" containerName="registry-server"
Feb 19 10:43:01 crc kubenswrapper[4965]: I0219 10:43:01.840828 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d24r2"
Feb 19 10:43:01 crc kubenswrapper[4965]: I0219 10:43:01.849177 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d24r2"]
Feb 19 10:43:02 crc kubenswrapper[4965]: I0219 10:43:02.005307 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23148b36-ccaf-4796-9f4e-1e215bb80cdf-utilities\") pod \"community-operators-d24r2\" (UID: \"23148b36-ccaf-4796-9f4e-1e215bb80cdf\") " pod="openshift-marketplace/community-operators-d24r2"
Feb 19 10:43:02 crc kubenswrapper[4965]: I0219 10:43:02.005601 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w9gh\" (UniqueName: \"kubernetes.io/projected/23148b36-ccaf-4796-9f4e-1e215bb80cdf-kube-api-access-8w9gh\") pod \"community-operators-d24r2\" (UID: \"23148b36-ccaf-4796-9f4e-1e215bb80cdf\") " pod="openshift-marketplace/community-operators-d24r2"
Feb 19 10:43:02 crc kubenswrapper[4965]: I0219 10:43:02.005639 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23148b36-ccaf-4796-9f4e-1e215bb80cdf-catalog-content\") pod \"community-operators-d24r2\" (UID: \"23148b36-ccaf-4796-9f4e-1e215bb80cdf\") " pod="openshift-marketplace/community-operators-d24r2"
Feb 19 10:43:02 crc kubenswrapper[4965]: I0219 10:43:02.107681 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23148b36-ccaf-4796-9f4e-1e215bb80cdf-utilities\") pod \"community-operators-d24r2\" (UID: \"23148b36-ccaf-4796-9f4e-1e215bb80cdf\") " pod="openshift-marketplace/community-operators-d24r2"
Feb 19 10:43:02 crc kubenswrapper[4965]: I0219 10:43:02.107728 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w9gh\" (UniqueName: \"kubernetes.io/projected/23148b36-ccaf-4796-9f4e-1e215bb80cdf-kube-api-access-8w9gh\") pod \"community-operators-d24r2\" (UID: \"23148b36-ccaf-4796-9f4e-1e215bb80cdf\") " pod="openshift-marketplace/community-operators-d24r2"
Feb 19 10:43:02 crc kubenswrapper[4965]: I0219 10:43:02.107760 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23148b36-ccaf-4796-9f4e-1e215bb80cdf-catalog-content\") pod \"community-operators-d24r2\" (UID: \"23148b36-ccaf-4796-9f4e-1e215bb80cdf\") " pod="openshift-marketplace/community-operators-d24r2"
Feb 19 10:43:02 crc kubenswrapper[4965]: I0219 10:43:02.108266 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23148b36-ccaf-4796-9f4e-1e215bb80cdf-catalog-content\") pod \"community-operators-d24r2\" (UID: \"23148b36-ccaf-4796-9f4e-1e215bb80cdf\") " pod="openshift-marketplace/community-operators-d24r2"
Feb 19 10:43:02 crc kubenswrapper[4965]: I0219 10:43:02.108557 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23148b36-ccaf-4796-9f4e-1e215bb80cdf-utilities\") pod \"community-operators-d24r2\" (UID: \"23148b36-ccaf-4796-9f4e-1e215bb80cdf\") " pod="openshift-marketplace/community-operators-d24r2"
Feb 19 10:43:02 crc kubenswrapper[4965]: I0219 10:43:02.146464 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w9gh\" (UniqueName: \"kubernetes.io/projected/23148b36-ccaf-4796-9f4e-1e215bb80cdf-kube-api-access-8w9gh\") pod \"community-operators-d24r2\" (UID: \"23148b36-ccaf-4796-9f4e-1e215bb80cdf\") " pod="openshift-marketplace/community-operators-d24r2"
Feb 19 10:43:02 crc kubenswrapper[4965]: I0219 10:43:02.189318 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d24r2"
Feb 19 10:43:03 crc kubenswrapper[4965]: I0219 10:43:03.006102 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d24r2"]
Feb 19 10:43:03 crc kubenswrapper[4965]: I0219 10:43:03.229945 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d24r2" event={"ID":"23148b36-ccaf-4796-9f4e-1e215bb80cdf","Type":"ContainerStarted","Data":"419b5b075f83ddd0747b52542363a6e8c141812614f84e89050daa653e98b354"}
Feb 19 10:43:04 crc kubenswrapper[4965]: I0219 10:43:04.240627 4965 generic.go:334] "Generic (PLEG): container finished" podID="23148b36-ccaf-4796-9f4e-1e215bb80cdf" containerID="068a67d9094869692d822b0d91a95075216d60907245a42ff9b94f419f0043c0" exitCode=0
Feb 19 10:43:04 crc kubenswrapper[4965]: I0219 10:43:04.240684 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d24r2" event={"ID":"23148b36-ccaf-4796-9f4e-1e215bb80cdf","Type":"ContainerDied","Data":"068a67d9094869692d822b0d91a95075216d60907245a42ff9b94f419f0043c0"}
Feb 19 10:43:05 crc kubenswrapper[4965]: I0219 10:43:05.251735 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d24r2" event={"ID":"23148b36-ccaf-4796-9f4e-1e215bb80cdf","Type":"ContainerStarted","Data":"a32ab859d6d33ca8aa9dc8386671352afdeec0e315f66934dc04175ae1e93c68"}
Feb 19 10:43:07 crc kubenswrapper[4965]: I0219 10:43:07.297587 4965 generic.go:334] "Generic (PLEG): container finished" podID="23148b36-ccaf-4796-9f4e-1e215bb80cdf" containerID="a32ab859d6d33ca8aa9dc8386671352afdeec0e315f66934dc04175ae1e93c68" exitCode=0
Feb 19 10:43:07 crc kubenswrapper[4965]: I0219 10:43:07.297681 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d24r2" event={"ID":"23148b36-ccaf-4796-9f4e-1e215bb80cdf","Type":"ContainerDied","Data":"a32ab859d6d33ca8aa9dc8386671352afdeec0e315f66934dc04175ae1e93c68"}
Feb 19 10:43:09 crc kubenswrapper[4965]: I0219 10:43:09.315121 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d24r2" event={"ID":"23148b36-ccaf-4796-9f4e-1e215bb80cdf","Type":"ContainerStarted","Data":"dfdfde3f33034d8255b2f4d8852c590e0970a00e2fdca9551497a8179334719f"}
Feb 19 10:43:09 crc kubenswrapper[4965]: I0219 10:43:09.339061 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d24r2" podStartSLOduration=3.761784922 podStartE2EDuration="8.33904104s" podCreationTimestamp="2026-02-19 10:43:01 +0000 UTC" firstStartedPulling="2026-02-19 10:43:04.242564787 +0000 UTC m=+3639.863886087" lastFinishedPulling="2026-02-19 10:43:08.819820895 +0000 UTC m=+3644.441142205" observedRunningTime="2026-02-19 10:43:09.335656558 +0000 UTC m=+3644.956977888" watchObservedRunningTime="2026-02-19 10:43:09.33904104 +0000 UTC m=+3644.960362350"
Feb 19 10:43:10 crc kubenswrapper[4965]: I0219 10:43:10.197916 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088"
Feb 19 10:43:10 crc kubenswrapper[4965]: E0219 10:43:10.198378 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83"
Feb 19 10:43:11 crc kubenswrapper[4965]: I0219 10:43:11.337054 4965 generic.go:334] "Generic (PLEG): container finished" podID="303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d" containerID="6bd54c4d743d82b3bf0e6267332ef9ce559585651704c2eed5ff4964d795128c" exitCode=0
Feb 19 10:43:11 crc kubenswrapper[4965]: I0219 10:43:11.337101 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vdz8f/must-gather-npf9w" event={"ID":"303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d","Type":"ContainerDied","Data":"6bd54c4d743d82b3bf0e6267332ef9ce559585651704c2eed5ff4964d795128c"}
Feb 19 10:43:11 crc kubenswrapper[4965]: I0219 10:43:11.337771 4965 scope.go:117] "RemoveContainer" containerID="6bd54c4d743d82b3bf0e6267332ef9ce559585651704c2eed5ff4964d795128c"
Feb 19 10:43:12 crc kubenswrapper[4965]: I0219 10:43:12.188891 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vdz8f_must-gather-npf9w_303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d/gather/0.log"
Feb 19 10:43:12 crc kubenswrapper[4965]: I0219 10:43:12.189354 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d24r2"
Feb 19 10:43:12 crc kubenswrapper[4965]: I0219 10:43:12.189829 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d24r2"
Feb 19 10:43:12 crc kubenswrapper[4965]: I0219 10:43:12.254338 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d24r2"
Feb 19 10:43:20 crc kubenswrapper[4965]: I0219 10:43:20.166512 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vdz8f/must-gather-npf9w"]
Feb 19 10:43:20 crc kubenswrapper[4965]: I0219 10:43:20.167310 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vdz8f/must-gather-npf9w" podUID="303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d" containerName="copy" containerID="cri-o://6e341d49bd437c3712b2fbfda8681b8f1b12afc0ef915825b93fe7afcdccab8b" gracePeriod=2
Feb 19 10:43:20 crc kubenswrapper[4965]: I0219 10:43:20.203886 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vdz8f/must-gather-npf9w"]
Feb 19 10:43:20 crc kubenswrapper[4965]: I0219 10:43:20.431300 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vdz8f_must-gather-npf9w_303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d/copy/0.log"
Feb 19 10:43:20 crc kubenswrapper[4965]: I0219 10:43:20.432058 4965 generic.go:334] "Generic (PLEG): container finished" podID="303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d" containerID="6e341d49bd437c3712b2fbfda8681b8f1b12afc0ef915825b93fe7afcdccab8b" exitCode=143
Feb 19 10:43:20 crc kubenswrapper[4965]: I0219 10:43:20.795296 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vdz8f_must-gather-npf9w_303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d/copy/0.log"
Feb 19 10:43:20 crc kubenswrapper[4965]: I0219 10:43:20.795857 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vdz8f/must-gather-npf9w"
Feb 19 10:43:20 crc kubenswrapper[4965]: I0219 10:43:20.919673 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d-must-gather-output\") pod \"303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d\" (UID: \"303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d\") "
Feb 19 10:43:20 crc kubenswrapper[4965]: I0219 10:43:20.919912 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrpmt\" (UniqueName: \"kubernetes.io/projected/303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d-kube-api-access-wrpmt\") pod \"303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d\" (UID: \"303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d\") "
Feb 19 10:43:20 crc kubenswrapper[4965]: I0219 10:43:20.937950 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d-kube-api-access-wrpmt" (OuterVolumeSpecName: "kube-api-access-wrpmt") pod "303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d" (UID: "303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d"). InnerVolumeSpecName "kube-api-access-wrpmt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:43:21 crc kubenswrapper[4965]: I0219 10:43:21.023062 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrpmt\" (UniqueName: \"kubernetes.io/projected/303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d-kube-api-access-wrpmt\") on node \"crc\" DevicePath \"\""
Feb 19 10:43:21 crc kubenswrapper[4965]: I0219 10:43:21.120800 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d" (UID: "303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:43:21 crc kubenswrapper[4965]: I0219 10:43:21.125223 4965 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 19 10:43:21 crc kubenswrapper[4965]: I0219 10:43:21.213629 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d" path="/var/lib/kubelet/pods/303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d/volumes"
Feb 19 10:43:21 crc kubenswrapper[4965]: I0219 10:43:21.447753 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vdz8f_must-gather-npf9w_303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d/copy/0.log"
Feb 19 10:43:21 crc kubenswrapper[4965]: I0219 10:43:21.448055 4965 scope.go:117] "RemoveContainer" containerID="6e341d49bd437c3712b2fbfda8681b8f1b12afc0ef915825b93fe7afcdccab8b"
Feb 19 10:43:21 crc kubenswrapper[4965]: I0219 10:43:21.448170 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vdz8f/must-gather-npf9w"
Feb 19 10:43:21 crc kubenswrapper[4965]: I0219 10:43:21.468501 4965 scope.go:117] "RemoveContainer" containerID="6bd54c4d743d82b3bf0e6267332ef9ce559585651704c2eed5ff4964d795128c"
Feb 19 10:43:22 crc kubenswrapper[4965]: I0219 10:43:22.247675 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d24r2"
Feb 19 10:43:22 crc kubenswrapper[4965]: I0219 10:43:22.313520 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d24r2"]
Feb 19 10:43:22 crc kubenswrapper[4965]: I0219 10:43:22.457439 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d24r2" podUID="23148b36-ccaf-4796-9f4e-1e215bb80cdf" containerName="registry-server" containerID="cri-o://dfdfde3f33034d8255b2f4d8852c590e0970a00e2fdca9551497a8179334719f" gracePeriod=2
Feb 19 10:43:22 crc kubenswrapper[4965]: E0219 10:43:22.747260 4965 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23148b36_ccaf_4796_9f4e_1e215bb80cdf.slice/crio-dfdfde3f33034d8255b2f4d8852c590e0970a00e2fdca9551497a8179334719f.scope\": RecentStats: unable to find data in memory cache]"
Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.197921 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088"
Feb 19 10:43:23 crc kubenswrapper[4965]: E0219 10:43:23.198543 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83"
Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.292349 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d24r2"
Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.467156 4965 generic.go:334] "Generic (PLEG): container finished" podID="23148b36-ccaf-4796-9f4e-1e215bb80cdf" containerID="dfdfde3f33034d8255b2f4d8852c590e0970a00e2fdca9551497a8179334719f" exitCode=0
Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.467228 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d24r2"
Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.467249 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d24r2" event={"ID":"23148b36-ccaf-4796-9f4e-1e215bb80cdf","Type":"ContainerDied","Data":"dfdfde3f33034d8255b2f4d8852c590e0970a00e2fdca9551497a8179334719f"}
Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.467577 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d24r2" event={"ID":"23148b36-ccaf-4796-9f4e-1e215bb80cdf","Type":"ContainerDied","Data":"419b5b075f83ddd0747b52542363a6e8c141812614f84e89050daa653e98b354"}
Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.467597 4965 scope.go:117] "RemoveContainer" containerID="dfdfde3f33034d8255b2f4d8852c590e0970a00e2fdca9551497a8179334719f"
Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.471728 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23148b36-ccaf-4796-9f4e-1e215bb80cdf-catalog-content\") pod \"23148b36-ccaf-4796-9f4e-1e215bb80cdf\" (UID: \"23148b36-ccaf-4796-9f4e-1e215bb80cdf\") "
Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.471844 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w9gh\" (UniqueName: \"kubernetes.io/projected/23148b36-ccaf-4796-9f4e-1e215bb80cdf-kube-api-access-8w9gh\") pod \"23148b36-ccaf-4796-9f4e-1e215bb80cdf\" (UID: \"23148b36-ccaf-4796-9f4e-1e215bb80cdf\") "
Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.471881 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23148b36-ccaf-4796-9f4e-1e215bb80cdf-utilities\") pod \"23148b36-ccaf-4796-9f4e-1e215bb80cdf\" (UID: \"23148b36-ccaf-4796-9f4e-1e215bb80cdf\") "
Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.472809 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23148b36-ccaf-4796-9f4e-1e215bb80cdf-utilities" (OuterVolumeSpecName: "utilities") pod "23148b36-ccaf-4796-9f4e-1e215bb80cdf" (UID: "23148b36-ccaf-4796-9f4e-1e215bb80cdf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.478448 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23148b36-ccaf-4796-9f4e-1e215bb80cdf-kube-api-access-8w9gh" (OuterVolumeSpecName: "kube-api-access-8w9gh") pod "23148b36-ccaf-4796-9f4e-1e215bb80cdf" (UID: "23148b36-ccaf-4796-9f4e-1e215bb80cdf"). InnerVolumeSpecName "kube-api-access-8w9gh".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.489399 4965 scope.go:117] "RemoveContainer" containerID="a32ab859d6d33ca8aa9dc8386671352afdeec0e315f66934dc04175ae1e93c68" Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.526129 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23148b36-ccaf-4796-9f4e-1e215bb80cdf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23148b36-ccaf-4796-9f4e-1e215bb80cdf" (UID: "23148b36-ccaf-4796-9f4e-1e215bb80cdf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.547179 4965 scope.go:117] "RemoveContainer" containerID="068a67d9094869692d822b0d91a95075216d60907245a42ff9b94f419f0043c0" Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.574898 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23148b36-ccaf-4796-9f4e-1e215bb80cdf-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.574930 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23148b36-ccaf-4796-9f4e-1e215bb80cdf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.574940 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8w9gh\" (UniqueName: \"kubernetes.io/projected/23148b36-ccaf-4796-9f4e-1e215bb80cdf-kube-api-access-8w9gh\") on node \"crc\" DevicePath \"\"" Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.588605 4965 scope.go:117] "RemoveContainer" containerID="dfdfde3f33034d8255b2f4d8852c590e0970a00e2fdca9551497a8179334719f" Feb 19 10:43:23 crc kubenswrapper[4965]: E0219 10:43:23.589059 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"dfdfde3f33034d8255b2f4d8852c590e0970a00e2fdca9551497a8179334719f\": container with ID starting with dfdfde3f33034d8255b2f4d8852c590e0970a00e2fdca9551497a8179334719f not found: ID does not exist" containerID="dfdfde3f33034d8255b2f4d8852c590e0970a00e2fdca9551497a8179334719f" Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.589091 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfdfde3f33034d8255b2f4d8852c590e0970a00e2fdca9551497a8179334719f"} err="failed to get container status \"dfdfde3f33034d8255b2f4d8852c590e0970a00e2fdca9551497a8179334719f\": rpc error: code = NotFound desc = could not find container \"dfdfde3f33034d8255b2f4d8852c590e0970a00e2fdca9551497a8179334719f\": container with ID starting with dfdfde3f33034d8255b2f4d8852c590e0970a00e2fdca9551497a8179334719f not found: ID does not exist" Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.589117 4965 scope.go:117] "RemoveContainer" containerID="a32ab859d6d33ca8aa9dc8386671352afdeec0e315f66934dc04175ae1e93c68" Feb 19 10:43:23 crc kubenswrapper[4965]: E0219 10:43:23.589435 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a32ab859d6d33ca8aa9dc8386671352afdeec0e315f66934dc04175ae1e93c68\": container with ID starting with a32ab859d6d33ca8aa9dc8386671352afdeec0e315f66934dc04175ae1e93c68 not found: ID does not exist" containerID="a32ab859d6d33ca8aa9dc8386671352afdeec0e315f66934dc04175ae1e93c68" Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.589457 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a32ab859d6d33ca8aa9dc8386671352afdeec0e315f66934dc04175ae1e93c68"} err="failed to get container status \"a32ab859d6d33ca8aa9dc8386671352afdeec0e315f66934dc04175ae1e93c68\": rpc error: code = NotFound desc = could not find container 
\"a32ab859d6d33ca8aa9dc8386671352afdeec0e315f66934dc04175ae1e93c68\": container with ID starting with a32ab859d6d33ca8aa9dc8386671352afdeec0e315f66934dc04175ae1e93c68 not found: ID does not exist" Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.589472 4965 scope.go:117] "RemoveContainer" containerID="068a67d9094869692d822b0d91a95075216d60907245a42ff9b94f419f0043c0" Feb 19 10:43:23 crc kubenswrapper[4965]: E0219 10:43:23.589756 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"068a67d9094869692d822b0d91a95075216d60907245a42ff9b94f419f0043c0\": container with ID starting with 068a67d9094869692d822b0d91a95075216d60907245a42ff9b94f419f0043c0 not found: ID does not exist" containerID="068a67d9094869692d822b0d91a95075216d60907245a42ff9b94f419f0043c0" Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.589780 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"068a67d9094869692d822b0d91a95075216d60907245a42ff9b94f419f0043c0"} err="failed to get container status \"068a67d9094869692d822b0d91a95075216d60907245a42ff9b94f419f0043c0\": rpc error: code = NotFound desc = could not find container \"068a67d9094869692d822b0d91a95075216d60907245a42ff9b94f419f0043c0\": container with ID starting with 068a67d9094869692d822b0d91a95075216d60907245a42ff9b94f419f0043c0 not found: ID does not exist" Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.819650 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d24r2"] Feb 19 10:43:23 crc kubenswrapper[4965]: I0219 10:43:23.839451 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d24r2"] Feb 19 10:43:25 crc kubenswrapper[4965]: I0219 10:43:25.215228 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23148b36-ccaf-4796-9f4e-1e215bb80cdf" 
path="/var/lib/kubelet/pods/23148b36-ccaf-4796-9f4e-1e215bb80cdf/volumes" Feb 19 10:43:35 crc kubenswrapper[4965]: I0219 10:43:35.204916 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" Feb 19 10:43:35 crc kubenswrapper[4965]: E0219 10:43:35.206552 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:43:50 crc kubenswrapper[4965]: I0219 10:43:50.198536 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" Feb 19 10:43:50 crc kubenswrapper[4965]: E0219 10:43:50.199658 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:44:03 crc kubenswrapper[4965]: I0219 10:44:03.198251 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" Feb 19 10:44:03 crc kubenswrapper[4965]: E0219 10:44:03.198972 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:44:16 crc kubenswrapper[4965]: I0219 10:44:16.198533 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" Feb 19 10:44:16 crc kubenswrapper[4965]: E0219 10:44:16.199547 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:44:27 crc kubenswrapper[4965]: I0219 10:44:27.199111 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" Feb 19 10:44:27 crc kubenswrapper[4965]: E0219 10:44:27.199894 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:44:38 crc kubenswrapper[4965]: I0219 10:44:38.198457 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" Feb 19 10:44:38 crc kubenswrapper[4965]: E0219 10:44:38.199477 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:44:52 crc kubenswrapper[4965]: I0219 10:44:52.084942 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" Feb 19 10:44:52 crc kubenswrapper[4965]: E0219 10:44:52.086236 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.171241 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524965-4x75w"] Feb 19 10:45:00 crc kubenswrapper[4965]: E0219 10:45:00.172225 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23148b36-ccaf-4796-9f4e-1e215bb80cdf" containerName="registry-server" Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.172241 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="23148b36-ccaf-4796-9f4e-1e215bb80cdf" containerName="registry-server" Feb 19 10:45:00 crc kubenswrapper[4965]: E0219 10:45:00.172259 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23148b36-ccaf-4796-9f4e-1e215bb80cdf" containerName="extract-content" Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.172267 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="23148b36-ccaf-4796-9f4e-1e215bb80cdf" containerName="extract-content" Feb 19 10:45:00 crc kubenswrapper[4965]: E0219 10:45:00.172281 4965 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d" containerName="copy" Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.172287 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d" containerName="copy" Feb 19 10:45:00 crc kubenswrapper[4965]: E0219 10:45:00.172298 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d" containerName="gather" Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.172303 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d" containerName="gather" Feb 19 10:45:00 crc kubenswrapper[4965]: E0219 10:45:00.172314 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23148b36-ccaf-4796-9f4e-1e215bb80cdf" containerName="extract-utilities" Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.172320 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="23148b36-ccaf-4796-9f4e-1e215bb80cdf" containerName="extract-utilities" Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.172500 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d" containerName="copy" Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.172528 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="23148b36-ccaf-4796-9f4e-1e215bb80cdf" containerName="registry-server" Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.172540 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="303ea2f8-2ebb-4bc0-8df9-7ebc25ded71d" containerName="gather" Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.173274 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-4x75w" Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.177582 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.177752 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.199560 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524965-4x75w"] Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.323772 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx2vj\" (UniqueName: \"kubernetes.io/projected/89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d-kube-api-access-jx2vj\") pod \"collect-profiles-29524965-4x75w\" (UID: \"89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-4x75w" Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.323822 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d-config-volume\") pod \"collect-profiles-29524965-4x75w\" (UID: \"89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-4x75w" Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.323845 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d-secret-volume\") pod \"collect-profiles-29524965-4x75w\" (UID: \"89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-4x75w" Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.425719 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx2vj\" (UniqueName: \"kubernetes.io/projected/89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d-kube-api-access-jx2vj\") pod \"collect-profiles-29524965-4x75w\" (UID: \"89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-4x75w" Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.426025 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d-config-volume\") pod \"collect-profiles-29524965-4x75w\" (UID: \"89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-4x75w" Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.426132 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d-secret-volume\") pod \"collect-profiles-29524965-4x75w\" (UID: \"89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-4x75w" Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.427160 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d-config-volume\") pod \"collect-profiles-29524965-4x75w\" (UID: \"89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-4x75w" Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.438063 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d-secret-volume\") pod \"collect-profiles-29524965-4x75w\" (UID: \"89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-4x75w" Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.442339 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx2vj\" (UniqueName: \"kubernetes.io/projected/89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d-kube-api-access-jx2vj\") pod \"collect-profiles-29524965-4x75w\" (UID: \"89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-4x75w" Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.514067 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-4x75w" Feb 19 10:45:00 crc kubenswrapper[4965]: I0219 10:45:00.972526 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524965-4x75w"] Feb 19 10:45:01 crc kubenswrapper[4965]: I0219 10:45:01.185312 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-4x75w" event={"ID":"89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d","Type":"ContainerStarted","Data":"b3682efb069a5b83df52fca2e6bc6b5740d9f6f022553f7776d5ad27803583ff"} Feb 19 10:45:01 crc kubenswrapper[4965]: I0219 10:45:01.185357 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-4x75w" event={"ID":"89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d","Type":"ContainerStarted","Data":"2341b3ee30d6d8d64cde5f09051342f6969c6c0335a7ea5f73c1e1b13e56fc0e"} Feb 19 10:45:01 crc kubenswrapper[4965]: I0219 10:45:01.207568 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-4x75w" 
podStartSLOduration=1.207550365 podStartE2EDuration="1.207550365s" podCreationTimestamp="2026-02-19 10:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:45:01.202438511 +0000 UTC m=+3756.823759811" watchObservedRunningTime="2026-02-19 10:45:01.207550365 +0000 UTC m=+3756.828871675" Feb 19 10:45:02 crc kubenswrapper[4965]: I0219 10:45:02.195077 4965 generic.go:334] "Generic (PLEG): container finished" podID="89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d" containerID="b3682efb069a5b83df52fca2e6bc6b5740d9f6f022553f7776d5ad27803583ff" exitCode=0 Feb 19 10:45:02 crc kubenswrapper[4965]: I0219 10:45:02.195795 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-4x75w" event={"ID":"89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d","Type":"ContainerDied","Data":"b3682efb069a5b83df52fca2e6bc6b5740d9f6f022553f7776d5ad27803583ff"} Feb 19 10:45:03 crc kubenswrapper[4965]: I0219 10:45:03.769602 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-4x75w" Feb 19 10:45:03 crc kubenswrapper[4965]: I0219 10:45:03.903880 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx2vj\" (UniqueName: \"kubernetes.io/projected/89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d-kube-api-access-jx2vj\") pod \"89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d\" (UID: \"89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d\") " Feb 19 10:45:03 crc kubenswrapper[4965]: I0219 10:45:03.904005 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d-config-volume\") pod \"89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d\" (UID: \"89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d\") " Feb 19 10:45:03 crc kubenswrapper[4965]: I0219 10:45:03.904932 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d-secret-volume\") pod \"89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d\" (UID: \"89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d\") " Feb 19 10:45:03 crc kubenswrapper[4965]: I0219 10:45:03.904815 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d-config-volume" (OuterVolumeSpecName: "config-volume") pod "89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d" (UID: "89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:45:03 crc kubenswrapper[4965]: I0219 10:45:03.906957 4965 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:45:03 crc kubenswrapper[4965]: I0219 10:45:03.909784 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d-kube-api-access-jx2vj" (OuterVolumeSpecName: "kube-api-access-jx2vj") pod "89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d" (UID: "89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d"). InnerVolumeSpecName "kube-api-access-jx2vj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:45:03 crc kubenswrapper[4965]: I0219 10:45:03.917389 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d" (UID: "89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:45:04 crc kubenswrapper[4965]: I0219 10:45:04.008698 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx2vj\" (UniqueName: \"kubernetes.io/projected/89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d-kube-api-access-jx2vj\") on node \"crc\" DevicePath \"\"" Feb 19 10:45:04 crc kubenswrapper[4965]: I0219 10:45:04.008732 4965 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:45:04 crc kubenswrapper[4965]: I0219 10:45:04.215913 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-4x75w" event={"ID":"89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d","Type":"ContainerDied","Data":"2341b3ee30d6d8d64cde5f09051342f6969c6c0335a7ea5f73c1e1b13e56fc0e"} Feb 19 10:45:04 crc kubenswrapper[4965]: I0219 10:45:04.215958 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2341b3ee30d6d8d64cde5f09051342f6969c6c0335a7ea5f73c1e1b13e56fc0e" Feb 19 10:45:04 crc kubenswrapper[4965]: I0219 10:45:04.216009 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-4x75w" Feb 19 10:45:04 crc kubenswrapper[4965]: I0219 10:45:04.287368 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524920-vnskl"] Feb 19 10:45:04 crc kubenswrapper[4965]: I0219 10:45:04.298340 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524920-vnskl"] Feb 19 10:45:05 crc kubenswrapper[4965]: I0219 10:45:05.208006 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" Feb 19 10:45:05 crc kubenswrapper[4965]: E0219 10:45:05.208457 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:45:05 crc kubenswrapper[4965]: I0219 10:45:05.226157 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26362129-d9e2-4c99-925d-475b863b274a" path="/var/lib/kubelet/pods/26362129-d9e2-4c99-925d-475b863b274a/volumes" Feb 19 10:45:20 crc kubenswrapper[4965]: I0219 10:45:20.198701 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" Feb 19 10:45:20 crc kubenswrapper[4965]: E0219 10:45:20.200068 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:45:33 crc kubenswrapper[4965]: I0219 10:45:33.201223 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" Feb 19 10:45:33 crc kubenswrapper[4965]: E0219 10:45:33.202001 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:45:43 crc kubenswrapper[4965]: I0219 10:45:43.689883 4965 scope.go:117] "RemoveContainer" containerID="2dec4872582dbc706006c2ed72bf111d3dc386fddffb27499276bb624188b106" Feb 19 10:45:46 crc kubenswrapper[4965]: I0219 10:45:46.198604 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" Feb 19 10:45:46 crc kubenswrapper[4965]: E0219 10:45:46.199269 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:45:59 crc kubenswrapper[4965]: I0219 10:45:59.198879 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" Feb 19 10:45:59 crc kubenswrapper[4965]: E0219 10:45:59.200114 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:46:13 crc kubenswrapper[4965]: I0219 10:46:13.199063 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" Feb 19 10:46:13 crc kubenswrapper[4965]: E0219 10:46:13.200311 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:46:24 crc kubenswrapper[4965]: I0219 10:46:24.197992 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" Feb 19 10:46:24 crc kubenswrapper[4965]: E0219 10:46:24.199099 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:46:31 crc kubenswrapper[4965]: I0219 10:46:31.676845 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6r9fx/must-gather-6x62j"] Feb 19 10:46:31 crc kubenswrapper[4965]: E0219 10:46:31.678788 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d" 
containerName="collect-profiles" Feb 19 10:46:31 crc kubenswrapper[4965]: I0219 10:46:31.678907 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d" containerName="collect-profiles" Feb 19 10:46:31 crc kubenswrapper[4965]: I0219 10:46:31.679260 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="89e087b9-56eb-4bd8-9b2a-b6ac8beabe1d" containerName="collect-profiles" Feb 19 10:46:31 crc kubenswrapper[4965]: I0219 10:46:31.680560 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6r9fx/must-gather-6x62j" Feb 19 10:46:31 crc kubenswrapper[4965]: I0219 10:46:31.683054 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6r9fx"/"openshift-service-ca.crt" Feb 19 10:46:31 crc kubenswrapper[4965]: I0219 10:46:31.683130 4965 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6r9fx"/"kube-root-ca.crt" Feb 19 10:46:31 crc kubenswrapper[4965]: I0219 10:46:31.683228 4965 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-6r9fx"/"default-dockercfg-rfzxf" Feb 19 10:46:31 crc kubenswrapper[4965]: I0219 10:46:31.692228 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6r9fx/must-gather-6x62j"] Feb 19 10:46:31 crc kubenswrapper[4965]: I0219 10:46:31.858489 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f111f77a-70e9-4d6f-93cf-672cf0d7bd7f-must-gather-output\") pod \"must-gather-6x62j\" (UID: \"f111f77a-70e9-4d6f-93cf-672cf0d7bd7f\") " pod="openshift-must-gather-6r9fx/must-gather-6x62j" Feb 19 10:46:31 crc kubenswrapper[4965]: I0219 10:46:31.858818 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2knbn\" (UniqueName: 
\"kubernetes.io/projected/f111f77a-70e9-4d6f-93cf-672cf0d7bd7f-kube-api-access-2knbn\") pod \"must-gather-6x62j\" (UID: \"f111f77a-70e9-4d6f-93cf-672cf0d7bd7f\") " pod="openshift-must-gather-6r9fx/must-gather-6x62j" Feb 19 10:46:31 crc kubenswrapper[4965]: I0219 10:46:31.960780 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f111f77a-70e9-4d6f-93cf-672cf0d7bd7f-must-gather-output\") pod \"must-gather-6x62j\" (UID: \"f111f77a-70e9-4d6f-93cf-672cf0d7bd7f\") " pod="openshift-must-gather-6r9fx/must-gather-6x62j" Feb 19 10:46:31 crc kubenswrapper[4965]: I0219 10:46:31.960919 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2knbn\" (UniqueName: \"kubernetes.io/projected/f111f77a-70e9-4d6f-93cf-672cf0d7bd7f-kube-api-access-2knbn\") pod \"must-gather-6x62j\" (UID: \"f111f77a-70e9-4d6f-93cf-672cf0d7bd7f\") " pod="openshift-must-gather-6r9fx/must-gather-6x62j" Feb 19 10:46:31 crc kubenswrapper[4965]: I0219 10:46:31.961868 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f111f77a-70e9-4d6f-93cf-672cf0d7bd7f-must-gather-output\") pod \"must-gather-6x62j\" (UID: \"f111f77a-70e9-4d6f-93cf-672cf0d7bd7f\") " pod="openshift-must-gather-6r9fx/must-gather-6x62j" Feb 19 10:46:31 crc kubenswrapper[4965]: I0219 10:46:31.981012 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2knbn\" (UniqueName: \"kubernetes.io/projected/f111f77a-70e9-4d6f-93cf-672cf0d7bd7f-kube-api-access-2knbn\") pod \"must-gather-6x62j\" (UID: \"f111f77a-70e9-4d6f-93cf-672cf0d7bd7f\") " pod="openshift-must-gather-6r9fx/must-gather-6x62j" Feb 19 10:46:32 crc kubenswrapper[4965]: I0219 10:46:32.004982 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6r9fx/must-gather-6x62j" Feb 19 10:46:32 crc kubenswrapper[4965]: I0219 10:46:32.570539 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6r9fx/must-gather-6x62j"] Feb 19 10:46:33 crc kubenswrapper[4965]: I0219 10:46:33.178609 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6r9fx/must-gather-6x62j" event={"ID":"f111f77a-70e9-4d6f-93cf-672cf0d7bd7f","Type":"ContainerStarted","Data":"32067aef8af394a29248c5c4c4d72c68fb56494c5f38be04157a99f572407114"} Feb 19 10:46:33 crc kubenswrapper[4965]: I0219 10:46:33.178928 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6r9fx/must-gather-6x62j" event={"ID":"f111f77a-70e9-4d6f-93cf-672cf0d7bd7f","Type":"ContainerStarted","Data":"3ee9af2015241b1911e1e1b881a24747044ddbfe0436cd3ef7872034a12874c3"} Feb 19 10:46:34 crc kubenswrapper[4965]: I0219 10:46:34.191566 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6r9fx/must-gather-6x62j" event={"ID":"f111f77a-70e9-4d6f-93cf-672cf0d7bd7f","Type":"ContainerStarted","Data":"f4f1e82121dffd27ad0aef02e1fd254be171b87d8d37efc6eadac9b6f5150f6a"} Feb 19 10:46:34 crc kubenswrapper[4965]: I0219 10:46:34.215052 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6r9fx/must-gather-6x62j" podStartSLOduration=3.215022148 podStartE2EDuration="3.215022148s" podCreationTimestamp="2026-02-19 10:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:46:34.205448885 +0000 UTC m=+3849.826770205" watchObservedRunningTime="2026-02-19 10:46:34.215022148 +0000 UTC m=+3849.836343488" Feb 19 10:46:36 crc kubenswrapper[4965]: I0219 10:46:36.198534 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" Feb 19 10:46:36 crc 
kubenswrapper[4965]: E0219 10:46:36.199063 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:46:37 crc kubenswrapper[4965]: I0219 10:46:37.401710 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6r9fx/crc-debug-ftvll"] Feb 19 10:46:37 crc kubenswrapper[4965]: I0219 10:46:37.404456 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6r9fx/crc-debug-ftvll" Feb 19 10:46:37 crc kubenswrapper[4965]: I0219 10:46:37.479683 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph56t\" (UniqueName: \"kubernetes.io/projected/b9c5939a-9751-4537-b47b-6041ffe57891-kube-api-access-ph56t\") pod \"crc-debug-ftvll\" (UID: \"b9c5939a-9751-4537-b47b-6041ffe57891\") " pod="openshift-must-gather-6r9fx/crc-debug-ftvll" Feb 19 10:46:37 crc kubenswrapper[4965]: I0219 10:46:37.479812 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9c5939a-9751-4537-b47b-6041ffe57891-host\") pod \"crc-debug-ftvll\" (UID: \"b9c5939a-9751-4537-b47b-6041ffe57891\") " pod="openshift-must-gather-6r9fx/crc-debug-ftvll" Feb 19 10:46:37 crc kubenswrapper[4965]: I0219 10:46:37.581663 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9c5939a-9751-4537-b47b-6041ffe57891-host\") pod \"crc-debug-ftvll\" (UID: \"b9c5939a-9751-4537-b47b-6041ffe57891\") " pod="openshift-must-gather-6r9fx/crc-debug-ftvll" Feb 19 10:46:37 
crc kubenswrapper[4965]: I0219 10:46:37.581818 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9c5939a-9751-4537-b47b-6041ffe57891-host\") pod \"crc-debug-ftvll\" (UID: \"b9c5939a-9751-4537-b47b-6041ffe57891\") " pod="openshift-must-gather-6r9fx/crc-debug-ftvll" Feb 19 10:46:37 crc kubenswrapper[4965]: I0219 10:46:37.582183 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph56t\" (UniqueName: \"kubernetes.io/projected/b9c5939a-9751-4537-b47b-6041ffe57891-kube-api-access-ph56t\") pod \"crc-debug-ftvll\" (UID: \"b9c5939a-9751-4537-b47b-6041ffe57891\") " pod="openshift-must-gather-6r9fx/crc-debug-ftvll" Feb 19 10:46:37 crc kubenswrapper[4965]: I0219 10:46:37.613717 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph56t\" (UniqueName: \"kubernetes.io/projected/b9c5939a-9751-4537-b47b-6041ffe57891-kube-api-access-ph56t\") pod \"crc-debug-ftvll\" (UID: \"b9c5939a-9751-4537-b47b-6041ffe57891\") " pod="openshift-must-gather-6r9fx/crc-debug-ftvll" Feb 19 10:46:37 crc kubenswrapper[4965]: I0219 10:46:37.729072 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6r9fx/crc-debug-ftvll" Feb 19 10:46:38 crc kubenswrapper[4965]: I0219 10:46:38.252657 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6r9fx/crc-debug-ftvll" event={"ID":"b9c5939a-9751-4537-b47b-6041ffe57891","Type":"ContainerStarted","Data":"f2e99faab2a6ce1f49874fad86075f83f1c6c110f11f1b6c253b421612fa30fd"} Feb 19 10:46:38 crc kubenswrapper[4965]: I0219 10:46:38.253004 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6r9fx/crc-debug-ftvll" event={"ID":"b9c5939a-9751-4537-b47b-6041ffe57891","Type":"ContainerStarted","Data":"02cc64aef777c53e4bc5d701e0b0e7586d4fa21aac2ece2668d07f4845842dbe"} Feb 19 10:46:38 crc kubenswrapper[4965]: I0219 10:46:38.272773 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6r9fx/crc-debug-ftvll" podStartSLOduration=1.272757802 podStartE2EDuration="1.272757802s" podCreationTimestamp="2026-02-19 10:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:46:38.270329543 +0000 UTC m=+3853.891650853" watchObservedRunningTime="2026-02-19 10:46:38.272757802 +0000 UTC m=+3853.894079102" Feb 19 10:46:48 crc kubenswrapper[4965]: I0219 10:46:48.198869 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" Feb 19 10:46:48 crc kubenswrapper[4965]: E0219 10:46:48.199660 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:46:51 crc 
kubenswrapper[4965]: I0219 10:46:51.768417 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-sbvpc" podUID="23c8c1d2-4e7b-4cd4-99cf-92130064bbbf" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:47:02 crc kubenswrapper[4965]: I0219 10:47:02.198238 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" Feb 19 10:47:02 crc kubenswrapper[4965]: E0219 10:47:02.199037 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:47:14 crc kubenswrapper[4965]: I0219 10:47:14.197457 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" Feb 19 10:47:14 crc kubenswrapper[4965]: E0219 10:47:14.198148 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:47:27 crc kubenswrapper[4965]: I0219 10:47:27.199331 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" Feb 19 10:47:27 crc kubenswrapper[4965]: E0219 10:47:27.200973 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:47:31 crc kubenswrapper[4965]: I0219 10:47:31.679155 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5kg7x"] Feb 19 10:47:31 crc kubenswrapper[4965]: I0219 10:47:31.682239 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5kg7x" Feb 19 10:47:31 crc kubenswrapper[4965]: I0219 10:47:31.693543 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5kg7x"] Feb 19 10:47:31 crc kubenswrapper[4965]: I0219 10:47:31.754125 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kkxk\" (UniqueName: \"kubernetes.io/projected/ba92a1b6-927a-44c8-927a-1984643f760d-kube-api-access-5kkxk\") pod \"redhat-operators-5kg7x\" (UID: \"ba92a1b6-927a-44c8-927a-1984643f760d\") " pod="openshift-marketplace/redhat-operators-5kg7x" Feb 19 10:47:31 crc kubenswrapper[4965]: I0219 10:47:31.754346 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba92a1b6-927a-44c8-927a-1984643f760d-utilities\") pod \"redhat-operators-5kg7x\" (UID: \"ba92a1b6-927a-44c8-927a-1984643f760d\") " pod="openshift-marketplace/redhat-operators-5kg7x" Feb 19 10:47:31 crc kubenswrapper[4965]: I0219 10:47:31.754374 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba92a1b6-927a-44c8-927a-1984643f760d-catalog-content\") pod \"redhat-operators-5kg7x\" (UID: 
\"ba92a1b6-927a-44c8-927a-1984643f760d\") " pod="openshift-marketplace/redhat-operators-5kg7x" Feb 19 10:47:31 crc kubenswrapper[4965]: I0219 10:47:31.856452 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba92a1b6-927a-44c8-927a-1984643f760d-utilities\") pod \"redhat-operators-5kg7x\" (UID: \"ba92a1b6-927a-44c8-927a-1984643f760d\") " pod="openshift-marketplace/redhat-operators-5kg7x" Feb 19 10:47:31 crc kubenswrapper[4965]: I0219 10:47:31.856496 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba92a1b6-927a-44c8-927a-1984643f760d-catalog-content\") pod \"redhat-operators-5kg7x\" (UID: \"ba92a1b6-927a-44c8-927a-1984643f760d\") " pod="openshift-marketplace/redhat-operators-5kg7x" Feb 19 10:47:31 crc kubenswrapper[4965]: I0219 10:47:31.856542 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kkxk\" (UniqueName: \"kubernetes.io/projected/ba92a1b6-927a-44c8-927a-1984643f760d-kube-api-access-5kkxk\") pod \"redhat-operators-5kg7x\" (UID: \"ba92a1b6-927a-44c8-927a-1984643f760d\") " pod="openshift-marketplace/redhat-operators-5kg7x" Feb 19 10:47:31 crc kubenswrapper[4965]: I0219 10:47:31.857173 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba92a1b6-927a-44c8-927a-1984643f760d-utilities\") pod \"redhat-operators-5kg7x\" (UID: \"ba92a1b6-927a-44c8-927a-1984643f760d\") " pod="openshift-marketplace/redhat-operators-5kg7x" Feb 19 10:47:31 crc kubenswrapper[4965]: I0219 10:47:31.857191 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba92a1b6-927a-44c8-927a-1984643f760d-catalog-content\") pod \"redhat-operators-5kg7x\" (UID: \"ba92a1b6-927a-44c8-927a-1984643f760d\") " 
pod="openshift-marketplace/redhat-operators-5kg7x" Feb 19 10:47:31 crc kubenswrapper[4965]: I0219 10:47:31.875902 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kkxk\" (UniqueName: \"kubernetes.io/projected/ba92a1b6-927a-44c8-927a-1984643f760d-kube-api-access-5kkxk\") pod \"redhat-operators-5kg7x\" (UID: \"ba92a1b6-927a-44c8-927a-1984643f760d\") " pod="openshift-marketplace/redhat-operators-5kg7x" Feb 19 10:47:32 crc kubenswrapper[4965]: I0219 10:47:32.002358 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5kg7x" Feb 19 10:47:32 crc kubenswrapper[4965]: I0219 10:47:32.552286 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5kg7x"] Feb 19 10:47:33 crc kubenswrapper[4965]: I0219 10:47:33.170965 4965 generic.go:334] "Generic (PLEG): container finished" podID="ba92a1b6-927a-44c8-927a-1984643f760d" containerID="c35c4eedbd8b6ef44908e16f1ba583532b821979f0312b3bd3d25081fcf04c0a" exitCode=0 Feb 19 10:47:33 crc kubenswrapper[4965]: I0219 10:47:33.171106 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kg7x" event={"ID":"ba92a1b6-927a-44c8-927a-1984643f760d","Type":"ContainerDied","Data":"c35c4eedbd8b6ef44908e16f1ba583532b821979f0312b3bd3d25081fcf04c0a"} Feb 19 10:47:33 crc kubenswrapper[4965]: I0219 10:47:33.171280 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kg7x" event={"ID":"ba92a1b6-927a-44c8-927a-1984643f760d","Type":"ContainerStarted","Data":"9ad7414863cdad68385a683c8170dc1a93b0c58b591010a86a82e1f7d6cb9a19"} Feb 19 10:47:33 crc kubenswrapper[4965]: I0219 10:47:33.172723 4965 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:47:34 crc kubenswrapper[4965]: I0219 10:47:34.181967 4965 generic.go:334] "Generic (PLEG): container finished" 
podID="b9c5939a-9751-4537-b47b-6041ffe57891" containerID="f2e99faab2a6ce1f49874fad86075f83f1c6c110f11f1b6c253b421612fa30fd" exitCode=0 Feb 19 10:47:34 crc kubenswrapper[4965]: I0219 10:47:34.182186 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6r9fx/crc-debug-ftvll" event={"ID":"b9c5939a-9751-4537-b47b-6041ffe57891","Type":"ContainerDied","Data":"f2e99faab2a6ce1f49874fad86075f83f1c6c110f11f1b6c253b421612fa30fd"} Feb 19 10:47:34 crc kubenswrapper[4965]: I0219 10:47:34.189529 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kg7x" event={"ID":"ba92a1b6-927a-44c8-927a-1984643f760d","Type":"ContainerStarted","Data":"5808c26ddd618ec5056b52f2c5524c836a6d48417cb7a7dadd3ac12e3473d841"} Feb 19 10:47:35 crc kubenswrapper[4965]: I0219 10:47:35.327027 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6r9fx/crc-debug-ftvll" Feb 19 10:47:35 crc kubenswrapper[4965]: I0219 10:47:35.368517 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6r9fx/crc-debug-ftvll"] Feb 19 10:47:35 crc kubenswrapper[4965]: I0219 10:47:35.382271 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6r9fx/crc-debug-ftvll"] Feb 19 10:47:35 crc kubenswrapper[4965]: I0219 10:47:35.431453 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph56t\" (UniqueName: \"kubernetes.io/projected/b9c5939a-9751-4537-b47b-6041ffe57891-kube-api-access-ph56t\") pod \"b9c5939a-9751-4537-b47b-6041ffe57891\" (UID: \"b9c5939a-9751-4537-b47b-6041ffe57891\") " Feb 19 10:47:35 crc kubenswrapper[4965]: I0219 10:47:35.431893 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9c5939a-9751-4537-b47b-6041ffe57891-host\") pod \"b9c5939a-9751-4537-b47b-6041ffe57891\" (UID: 
\"b9c5939a-9751-4537-b47b-6041ffe57891\") " Feb 19 10:47:35 crc kubenswrapper[4965]: I0219 10:47:35.432425 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9c5939a-9751-4537-b47b-6041ffe57891-host" (OuterVolumeSpecName: "host") pod "b9c5939a-9751-4537-b47b-6041ffe57891" (UID: "b9c5939a-9751-4537-b47b-6041ffe57891"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:47:35 crc kubenswrapper[4965]: I0219 10:47:35.446878 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9c5939a-9751-4537-b47b-6041ffe57891-kube-api-access-ph56t" (OuterVolumeSpecName: "kube-api-access-ph56t") pod "b9c5939a-9751-4537-b47b-6041ffe57891" (UID: "b9c5939a-9751-4537-b47b-6041ffe57891"). InnerVolumeSpecName "kube-api-access-ph56t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:47:35 crc kubenswrapper[4965]: I0219 10:47:35.533882 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph56t\" (UniqueName: \"kubernetes.io/projected/b9c5939a-9751-4537-b47b-6041ffe57891-kube-api-access-ph56t\") on node \"crc\" DevicePath \"\"" Feb 19 10:47:35 crc kubenswrapper[4965]: I0219 10:47:35.533924 4965 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9c5939a-9751-4537-b47b-6041ffe57891-host\") on node \"crc\" DevicePath \"\"" Feb 19 10:47:36 crc kubenswrapper[4965]: I0219 10:47:36.210131 4965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02cc64aef777c53e4bc5d701e0b0e7586d4fa21aac2ece2668d07f4845842dbe" Feb 19 10:47:36 crc kubenswrapper[4965]: I0219 10:47:36.210236 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6r9fx/crc-debug-ftvll" Feb 19 10:47:36 crc kubenswrapper[4965]: I0219 10:47:36.603175 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6r9fx/crc-debug-mtb4c"] Feb 19 10:47:36 crc kubenswrapper[4965]: E0219 10:47:36.603914 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c5939a-9751-4537-b47b-6041ffe57891" containerName="container-00" Feb 19 10:47:36 crc kubenswrapper[4965]: I0219 10:47:36.603929 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c5939a-9751-4537-b47b-6041ffe57891" containerName="container-00" Feb 19 10:47:36 crc kubenswrapper[4965]: I0219 10:47:36.604232 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9c5939a-9751-4537-b47b-6041ffe57891" containerName="container-00" Feb 19 10:47:36 crc kubenswrapper[4965]: I0219 10:47:36.605096 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6r9fx/crc-debug-mtb4c" Feb 19 10:47:36 crc kubenswrapper[4965]: I0219 10:47:36.656467 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f-host\") pod \"crc-debug-mtb4c\" (UID: \"7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f\") " pod="openshift-must-gather-6r9fx/crc-debug-mtb4c" Feb 19 10:47:36 crc kubenswrapper[4965]: I0219 10:47:36.656726 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qwxr\" (UniqueName: \"kubernetes.io/projected/7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f-kube-api-access-2qwxr\") pod \"crc-debug-mtb4c\" (UID: \"7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f\") " pod="openshift-must-gather-6r9fx/crc-debug-mtb4c" Feb 19 10:47:36 crc kubenswrapper[4965]: I0219 10:47:36.759178 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qwxr\" (UniqueName: 
\"kubernetes.io/projected/7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f-kube-api-access-2qwxr\") pod \"crc-debug-mtb4c\" (UID: \"7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f\") " pod="openshift-must-gather-6r9fx/crc-debug-mtb4c" Feb 19 10:47:36 crc kubenswrapper[4965]: I0219 10:47:36.759380 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f-host\") pod \"crc-debug-mtb4c\" (UID: \"7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f\") " pod="openshift-must-gather-6r9fx/crc-debug-mtb4c" Feb 19 10:47:36 crc kubenswrapper[4965]: I0219 10:47:36.759514 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f-host\") pod \"crc-debug-mtb4c\" (UID: \"7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f\") " pod="openshift-must-gather-6r9fx/crc-debug-mtb4c" Feb 19 10:47:36 crc kubenswrapper[4965]: I0219 10:47:36.778108 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qwxr\" (UniqueName: \"kubernetes.io/projected/7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f-kube-api-access-2qwxr\") pod \"crc-debug-mtb4c\" (UID: \"7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f\") " pod="openshift-must-gather-6r9fx/crc-debug-mtb4c" Feb 19 10:47:36 crc kubenswrapper[4965]: I0219 10:47:36.924127 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6r9fx/crc-debug-mtb4c" Feb 19 10:47:37 crc kubenswrapper[4965]: I0219 10:47:37.210650 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9c5939a-9751-4537-b47b-6041ffe57891" path="/var/lib/kubelet/pods/b9c5939a-9751-4537-b47b-6041ffe57891/volumes" Feb 19 10:47:37 crc kubenswrapper[4965]: I0219 10:47:37.220085 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6r9fx/crc-debug-mtb4c" event={"ID":"7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f","Type":"ContainerStarted","Data":"a84a5ad9d68e48c5d5be9fa944b640fb557c3d6e1a70c38a44991626a857222f"} Feb 19 10:47:38 crc kubenswrapper[4965]: I0219 10:47:38.198047 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" Feb 19 10:47:38 crc kubenswrapper[4965]: E0219 10:47:38.198348 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:47:38 crc kubenswrapper[4965]: I0219 10:47:38.230381 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6r9fx/crc-debug-mtb4c" event={"ID":"7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f","Type":"ContainerStarted","Data":"11a0a5d0d7409d413ddb1e2c950de6aef1566d9afb4981d660978dc6cc3de35f"} Feb 19 10:47:38 crc kubenswrapper[4965]: I0219 10:47:38.247009 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6r9fx/crc-debug-mtb4c" podStartSLOduration=2.246994163 podStartE2EDuration="2.246994163s" podCreationTimestamp="2026-02-19 10:47:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:47:38.241006317 +0000 UTC m=+3913.862327627" watchObservedRunningTime="2026-02-19 10:47:38.246994163 +0000 UTC m=+3913.868315473" Feb 19 10:47:39 crc kubenswrapper[4965]: I0219 10:47:39.258912 4965 generic.go:334] "Generic (PLEG): container finished" podID="7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f" containerID="11a0a5d0d7409d413ddb1e2c950de6aef1566d9afb4981d660978dc6cc3de35f" exitCode=0 Feb 19 10:47:39 crc kubenswrapper[4965]: I0219 10:47:39.259157 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6r9fx/crc-debug-mtb4c" event={"ID":"7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f","Type":"ContainerDied","Data":"11a0a5d0d7409d413ddb1e2c950de6aef1566d9afb4981d660978dc6cc3de35f"} Feb 19 10:47:40 crc kubenswrapper[4965]: I0219 10:47:40.269860 4965 generic.go:334] "Generic (PLEG): container finished" podID="ba92a1b6-927a-44c8-927a-1984643f760d" containerID="5808c26ddd618ec5056b52f2c5524c836a6d48417cb7a7dadd3ac12e3473d841" exitCode=0 Feb 19 10:47:40 crc kubenswrapper[4965]: I0219 10:47:40.269955 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kg7x" event={"ID":"ba92a1b6-927a-44c8-927a-1984643f760d","Type":"ContainerDied","Data":"5808c26ddd618ec5056b52f2c5524c836a6d48417cb7a7dadd3ac12e3473d841"} Feb 19 10:47:40 crc kubenswrapper[4965]: I0219 10:47:40.378809 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6r9fx/crc-debug-mtb4c" Feb 19 10:47:40 crc kubenswrapper[4965]: I0219 10:47:40.408152 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6r9fx/crc-debug-mtb4c"] Feb 19 10:47:40 crc kubenswrapper[4965]: I0219 10:47:40.435546 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6r9fx/crc-debug-mtb4c"] Feb 19 10:47:40 crc kubenswrapper[4965]: I0219 10:47:40.539117 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f-host\") pod \"7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f\" (UID: \"7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f\") " Feb 19 10:47:40 crc kubenswrapper[4965]: I0219 10:47:40.539168 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qwxr\" (UniqueName: \"kubernetes.io/projected/7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f-kube-api-access-2qwxr\") pod \"7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f\" (UID: \"7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f\") " Feb 19 10:47:40 crc kubenswrapper[4965]: I0219 10:47:40.539236 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f-host" (OuterVolumeSpecName: "host") pod "7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f" (UID: "7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:47:40 crc kubenswrapper[4965]: I0219 10:47:40.539919 4965 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f-host\") on node \"crc\" DevicePath \"\"" Feb 19 10:47:40 crc kubenswrapper[4965]: I0219 10:47:40.546343 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f-kube-api-access-2qwxr" (OuterVolumeSpecName: "kube-api-access-2qwxr") pod "7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f" (UID: "7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f"). InnerVolumeSpecName "kube-api-access-2qwxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:47:40 crc kubenswrapper[4965]: I0219 10:47:40.642388 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qwxr\" (UniqueName: \"kubernetes.io/projected/7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f-kube-api-access-2qwxr\") on node \"crc\" DevicePath \"\"" Feb 19 10:47:41 crc kubenswrapper[4965]: I0219 10:47:41.216311 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f" path="/var/lib/kubelet/pods/7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f/volumes" Feb 19 10:47:41 crc kubenswrapper[4965]: I0219 10:47:41.293026 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kg7x" event={"ID":"ba92a1b6-927a-44c8-927a-1984643f760d","Type":"ContainerStarted","Data":"d317a00d5f07fcd1489969aaf32af9d216d19a00b58231c48f11447f7d54b215"} Feb 19 10:47:41 crc kubenswrapper[4965]: I0219 10:47:41.295589 4965 scope.go:117] "RemoveContainer" containerID="11a0a5d0d7409d413ddb1e2c950de6aef1566d9afb4981d660978dc6cc3de35f" Feb 19 10:47:41 crc kubenswrapper[4965]: I0219 10:47:41.295593 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6r9fx/crc-debug-mtb4c" Feb 19 10:47:41 crc kubenswrapper[4965]: I0219 10:47:41.321167 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5kg7x" podStartSLOduration=2.799955911 podStartE2EDuration="10.321143636s" podCreationTimestamp="2026-02-19 10:47:31 +0000 UTC" firstStartedPulling="2026-02-19 10:47:33.172534423 +0000 UTC m=+3908.793855733" lastFinishedPulling="2026-02-19 10:47:40.693722148 +0000 UTC m=+3916.315043458" observedRunningTime="2026-02-19 10:47:41.316223097 +0000 UTC m=+3916.937544427" watchObservedRunningTime="2026-02-19 10:47:41.321143636 +0000 UTC m=+3916.942464946" Feb 19 10:47:41 crc kubenswrapper[4965]: I0219 10:47:41.909588 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6r9fx/crc-debug-ttrvk"] Feb 19 10:47:41 crc kubenswrapper[4965]: E0219 10:47:41.910122 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f" containerName="container-00" Feb 19 10:47:41 crc kubenswrapper[4965]: I0219 10:47:41.910148 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f" containerName="container-00" Feb 19 10:47:41 crc kubenswrapper[4965]: I0219 10:47:41.910400 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e8b4140-9fe1-4f8c-a6fd-982bd3ddaf8f" containerName="container-00" Feb 19 10:47:41 crc kubenswrapper[4965]: I0219 10:47:41.911114 4965 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6r9fx/crc-debug-ttrvk" Feb 19 10:47:42 crc kubenswrapper[4965]: I0219 10:47:42.002541 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5kg7x" Feb 19 10:47:42 crc kubenswrapper[4965]: I0219 10:47:42.002632 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5kg7x" Feb 19 10:47:42 crc kubenswrapper[4965]: I0219 10:47:42.071205 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs26h\" (UniqueName: \"kubernetes.io/projected/8d41756d-00be-48a1-b087-7359114fc01b-kube-api-access-cs26h\") pod \"crc-debug-ttrvk\" (UID: \"8d41756d-00be-48a1-b087-7359114fc01b\") " pod="openshift-must-gather-6r9fx/crc-debug-ttrvk" Feb 19 10:47:42 crc kubenswrapper[4965]: I0219 10:47:42.071275 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d41756d-00be-48a1-b087-7359114fc01b-host\") pod \"crc-debug-ttrvk\" (UID: \"8d41756d-00be-48a1-b087-7359114fc01b\") " pod="openshift-must-gather-6r9fx/crc-debug-ttrvk" Feb 19 10:47:42 crc kubenswrapper[4965]: I0219 10:47:42.173335 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs26h\" (UniqueName: \"kubernetes.io/projected/8d41756d-00be-48a1-b087-7359114fc01b-kube-api-access-cs26h\") pod \"crc-debug-ttrvk\" (UID: \"8d41756d-00be-48a1-b087-7359114fc01b\") " pod="openshift-must-gather-6r9fx/crc-debug-ttrvk" Feb 19 10:47:42 crc kubenswrapper[4965]: I0219 10:47:42.173390 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d41756d-00be-48a1-b087-7359114fc01b-host\") pod \"crc-debug-ttrvk\" (UID: \"8d41756d-00be-48a1-b087-7359114fc01b\") " pod="openshift-must-gather-6r9fx/crc-debug-ttrvk" 
Feb 19 10:47:42 crc kubenswrapper[4965]: I0219 10:47:42.173576 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d41756d-00be-48a1-b087-7359114fc01b-host\") pod \"crc-debug-ttrvk\" (UID: \"8d41756d-00be-48a1-b087-7359114fc01b\") " pod="openshift-must-gather-6r9fx/crc-debug-ttrvk" Feb 19 10:47:42 crc kubenswrapper[4965]: I0219 10:47:42.196823 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs26h\" (UniqueName: \"kubernetes.io/projected/8d41756d-00be-48a1-b087-7359114fc01b-kube-api-access-cs26h\") pod \"crc-debug-ttrvk\" (UID: \"8d41756d-00be-48a1-b087-7359114fc01b\") " pod="openshift-must-gather-6r9fx/crc-debug-ttrvk" Feb 19 10:47:42 crc kubenswrapper[4965]: I0219 10:47:42.228712 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6r9fx/crc-debug-ttrvk" Feb 19 10:47:42 crc kubenswrapper[4965]: W0219 10:47:42.273555 4965 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d41756d_00be_48a1_b087_7359114fc01b.slice/crio-844e15a92a94285a3bbbdb3386614cbb13683cacf71e923c478c816d0e200d85 WatchSource:0}: Error finding container 844e15a92a94285a3bbbdb3386614cbb13683cacf71e923c478c816d0e200d85: Status 404 returned error can't find the container with id 844e15a92a94285a3bbbdb3386614cbb13683cacf71e923c478c816d0e200d85 Feb 19 10:47:42 crc kubenswrapper[4965]: I0219 10:47:42.307742 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6r9fx/crc-debug-ttrvk" event={"ID":"8d41756d-00be-48a1-b087-7359114fc01b","Type":"ContainerStarted","Data":"844e15a92a94285a3bbbdb3386614cbb13683cacf71e923c478c816d0e200d85"} Feb 19 10:47:43 crc kubenswrapper[4965]: I0219 10:47:43.057908 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5kg7x" 
podUID="ba92a1b6-927a-44c8-927a-1984643f760d" containerName="registry-server" probeResult="failure" output=< Feb 19 10:47:43 crc kubenswrapper[4965]: timeout: failed to connect service ":50051" within 1s Feb 19 10:47:43 crc kubenswrapper[4965]: > Feb 19 10:47:43 crc kubenswrapper[4965]: I0219 10:47:43.321099 4965 generic.go:334] "Generic (PLEG): container finished" podID="8d41756d-00be-48a1-b087-7359114fc01b" containerID="5c23b6e6a7cc5181101b93ef29825dd4122d7db806e6a4a7fdb59ffc5325cfd9" exitCode=0 Feb 19 10:47:43 crc kubenswrapper[4965]: I0219 10:47:43.321220 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6r9fx/crc-debug-ttrvk" event={"ID":"8d41756d-00be-48a1-b087-7359114fc01b","Type":"ContainerDied","Data":"5c23b6e6a7cc5181101b93ef29825dd4122d7db806e6a4a7fdb59ffc5325cfd9"} Feb 19 10:47:43 crc kubenswrapper[4965]: I0219 10:47:43.368451 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6r9fx/crc-debug-ttrvk"] Feb 19 10:47:43 crc kubenswrapper[4965]: I0219 10:47:43.376721 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6r9fx/crc-debug-ttrvk"] Feb 19 10:47:44 crc kubenswrapper[4965]: I0219 10:47:44.444525 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6r9fx/crc-debug-ttrvk" Feb 19 10:47:44 crc kubenswrapper[4965]: I0219 10:47:44.525920 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs26h\" (UniqueName: \"kubernetes.io/projected/8d41756d-00be-48a1-b087-7359114fc01b-kube-api-access-cs26h\") pod \"8d41756d-00be-48a1-b087-7359114fc01b\" (UID: \"8d41756d-00be-48a1-b087-7359114fc01b\") " Feb 19 10:47:44 crc kubenswrapper[4965]: I0219 10:47:44.526191 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d41756d-00be-48a1-b087-7359114fc01b-host\") pod \"8d41756d-00be-48a1-b087-7359114fc01b\" (UID: \"8d41756d-00be-48a1-b087-7359114fc01b\") " Feb 19 10:47:44 crc kubenswrapper[4965]: I0219 10:47:44.526256 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d41756d-00be-48a1-b087-7359114fc01b-host" (OuterVolumeSpecName: "host") pod "8d41756d-00be-48a1-b087-7359114fc01b" (UID: "8d41756d-00be-48a1-b087-7359114fc01b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:47:44 crc kubenswrapper[4965]: I0219 10:47:44.527299 4965 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d41756d-00be-48a1-b087-7359114fc01b-host\") on node \"crc\" DevicePath \"\"" Feb 19 10:47:44 crc kubenswrapper[4965]: I0219 10:47:44.540492 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d41756d-00be-48a1-b087-7359114fc01b-kube-api-access-cs26h" (OuterVolumeSpecName: "kube-api-access-cs26h") pod "8d41756d-00be-48a1-b087-7359114fc01b" (UID: "8d41756d-00be-48a1-b087-7359114fc01b"). InnerVolumeSpecName "kube-api-access-cs26h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:47:44 crc kubenswrapper[4965]: I0219 10:47:44.631259 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs26h\" (UniqueName: \"kubernetes.io/projected/8d41756d-00be-48a1-b087-7359114fc01b-kube-api-access-cs26h\") on node \"crc\" DevicePath \"\"" Feb 19 10:47:45 crc kubenswrapper[4965]: I0219 10:47:45.213283 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d41756d-00be-48a1-b087-7359114fc01b" path="/var/lib/kubelet/pods/8d41756d-00be-48a1-b087-7359114fc01b/volumes" Feb 19 10:47:45 crc kubenswrapper[4965]: I0219 10:47:45.338911 4965 scope.go:117] "RemoveContainer" containerID="5c23b6e6a7cc5181101b93ef29825dd4122d7db806e6a4a7fdb59ffc5325cfd9" Feb 19 10:47:45 crc kubenswrapper[4965]: I0219 10:47:45.339162 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6r9fx/crc-debug-ttrvk" Feb 19 10:47:52 crc kubenswrapper[4965]: I0219 10:47:52.091646 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5kg7x" Feb 19 10:47:52 crc kubenswrapper[4965]: I0219 10:47:52.198556 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" Feb 19 10:47:52 crc kubenswrapper[4965]: I0219 10:47:52.232806 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5kg7x" Feb 19 10:47:53 crc kubenswrapper[4965]: I0219 10:47:53.420623 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerStarted","Data":"c3e85c95253f7e8eee9d6a40dcd4eec4ece10846ad5e5ac11a1038255d85beca"} Feb 19 10:47:55 crc kubenswrapper[4965]: I0219 10:47:55.995537 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-5kg7x"] Feb 19 10:47:55 crc kubenswrapper[4965]: I0219 10:47:55.996188 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5kg7x" podUID="ba92a1b6-927a-44c8-927a-1984643f760d" containerName="registry-server" containerID="cri-o://d317a00d5f07fcd1489969aaf32af9d216d19a00b58231c48f11447f7d54b215" gracePeriod=2 Feb 19 10:47:56 crc kubenswrapper[4965]: I0219 10:47:56.504572 4965 generic.go:334] "Generic (PLEG): container finished" podID="ba92a1b6-927a-44c8-927a-1984643f760d" containerID="d317a00d5f07fcd1489969aaf32af9d216d19a00b58231c48f11447f7d54b215" exitCode=0 Feb 19 10:47:56 crc kubenswrapper[4965]: I0219 10:47:56.504816 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kg7x" event={"ID":"ba92a1b6-927a-44c8-927a-1984643f760d","Type":"ContainerDied","Data":"d317a00d5f07fcd1489969aaf32af9d216d19a00b58231c48f11447f7d54b215"} Feb 19 10:47:56 crc kubenswrapper[4965]: I0219 10:47:56.780671 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5kg7x" Feb 19 10:47:56 crc kubenswrapper[4965]: I0219 10:47:56.895524 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba92a1b6-927a-44c8-927a-1984643f760d-utilities\") pod \"ba92a1b6-927a-44c8-927a-1984643f760d\" (UID: \"ba92a1b6-927a-44c8-927a-1984643f760d\") " Feb 19 10:47:56 crc kubenswrapper[4965]: I0219 10:47:56.895973 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kkxk\" (UniqueName: \"kubernetes.io/projected/ba92a1b6-927a-44c8-927a-1984643f760d-kube-api-access-5kkxk\") pod \"ba92a1b6-927a-44c8-927a-1984643f760d\" (UID: \"ba92a1b6-927a-44c8-927a-1984643f760d\") " Feb 19 10:47:56 crc kubenswrapper[4965]: I0219 10:47:56.896050 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba92a1b6-927a-44c8-927a-1984643f760d-catalog-content\") pod \"ba92a1b6-927a-44c8-927a-1984643f760d\" (UID: \"ba92a1b6-927a-44c8-927a-1984643f760d\") " Feb 19 10:47:56 crc kubenswrapper[4965]: I0219 10:47:56.896467 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba92a1b6-927a-44c8-927a-1984643f760d-utilities" (OuterVolumeSpecName: "utilities") pod "ba92a1b6-927a-44c8-927a-1984643f760d" (UID: "ba92a1b6-927a-44c8-927a-1984643f760d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:47:56 crc kubenswrapper[4965]: I0219 10:47:56.896608 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba92a1b6-927a-44c8-927a-1984643f760d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:47:56 crc kubenswrapper[4965]: I0219 10:47:56.910504 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba92a1b6-927a-44c8-927a-1984643f760d-kube-api-access-5kkxk" (OuterVolumeSpecName: "kube-api-access-5kkxk") pod "ba92a1b6-927a-44c8-927a-1984643f760d" (UID: "ba92a1b6-927a-44c8-927a-1984643f760d"). InnerVolumeSpecName "kube-api-access-5kkxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:47:56 crc kubenswrapper[4965]: I0219 10:47:56.999632 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kkxk\" (UniqueName: \"kubernetes.io/projected/ba92a1b6-927a-44c8-927a-1984643f760d-kube-api-access-5kkxk\") on node \"crc\" DevicePath \"\"" Feb 19 10:47:57 crc kubenswrapper[4965]: I0219 10:47:57.058518 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba92a1b6-927a-44c8-927a-1984643f760d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba92a1b6-927a-44c8-927a-1984643f760d" (UID: "ba92a1b6-927a-44c8-927a-1984643f760d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:47:57 crc kubenswrapper[4965]: I0219 10:47:57.102315 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba92a1b6-927a-44c8-927a-1984643f760d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:47:57 crc kubenswrapper[4965]: I0219 10:47:57.524392 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kg7x" event={"ID":"ba92a1b6-927a-44c8-927a-1984643f760d","Type":"ContainerDied","Data":"9ad7414863cdad68385a683c8170dc1a93b0c58b591010a86a82e1f7d6cb9a19"} Feb 19 10:47:57 crc kubenswrapper[4965]: I0219 10:47:57.524461 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5kg7x" Feb 19 10:47:57 crc kubenswrapper[4965]: I0219 10:47:57.524737 4965 scope.go:117] "RemoveContainer" containerID="d317a00d5f07fcd1489969aaf32af9d216d19a00b58231c48f11447f7d54b215" Feb 19 10:47:57 crc kubenswrapper[4965]: I0219 10:47:57.556275 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5kg7x"] Feb 19 10:47:57 crc kubenswrapper[4965]: I0219 10:47:57.574065 4965 scope.go:117] "RemoveContainer" containerID="5808c26ddd618ec5056b52f2c5524c836a6d48417cb7a7dadd3ac12e3473d841" Feb 19 10:47:57 crc kubenswrapper[4965]: I0219 10:47:57.576894 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5kg7x"] Feb 19 10:47:57 crc kubenswrapper[4965]: I0219 10:47:57.619032 4965 scope.go:117] "RemoveContainer" containerID="c35c4eedbd8b6ef44908e16f1ba583532b821979f0312b3bd3d25081fcf04c0a" Feb 19 10:47:59 crc kubenswrapper[4965]: I0219 10:47:59.209707 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba92a1b6-927a-44c8-927a-1984643f760d" path="/var/lib/kubelet/pods/ba92a1b6-927a-44c8-927a-1984643f760d/volumes" Feb 19 10:48:27 crc 
kubenswrapper[4965]: I0219 10:48:27.345900 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_45105c9e-db96-41c5-ba42-d56027ca318c/init-config-reloader/0.log" Feb 19 10:48:27 crc kubenswrapper[4965]: I0219 10:48:27.465456 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_45105c9e-db96-41c5-ba42-d56027ca318c/init-config-reloader/0.log" Feb 19 10:48:27 crc kubenswrapper[4965]: I0219 10:48:27.500374 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_45105c9e-db96-41c5-ba42-d56027ca318c/alertmanager/0.log" Feb 19 10:48:27 crc kubenswrapper[4965]: I0219 10:48:27.580573 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_45105c9e-db96-41c5-ba42-d56027ca318c/config-reloader/0.log" Feb 19 10:48:27 crc kubenswrapper[4965]: I0219 10:48:27.789441 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c97f468c6-bwf6p_45f4a2b8-338f-4c3d-afe8-305eb599081c/barbican-api/0.log" Feb 19 10:48:27 crc kubenswrapper[4965]: I0219 10:48:27.823540 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c97f468c6-bwf6p_45f4a2b8-338f-4c3d-afe8-305eb599081c/barbican-api-log/0.log" Feb 19 10:48:27 crc kubenswrapper[4965]: I0219 10:48:27.898700 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6855d4854d-gc94v_efe10142-642a-45d3-9f5a-8d1f2cb717e9/barbican-keystone-listener/0.log" Feb 19 10:48:28 crc kubenswrapper[4965]: I0219 10:48:28.157101 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6855d4854d-gc94v_efe10142-642a-45d3-9f5a-8d1f2cb717e9/barbican-keystone-listener-log/0.log" Feb 19 10:48:28 crc kubenswrapper[4965]: I0219 10:48:28.201479 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-55c5f69ff7-qk9n8_a32c3eed-880c-428c-b58e-d89c763d11b9/barbican-worker/0.log" Feb 19 10:48:28 crc kubenswrapper[4965]: I0219 10:48:28.254488 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55c5f69ff7-qk9n8_a32c3eed-880c-428c-b58e-d89c763d11b9/barbican-worker-log/0.log" Feb 19 10:48:28 crc kubenswrapper[4965]: I0219 10:48:28.508865 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-m8wg9_a6a006f0-d704-4e08-bc46-118269ad9b1a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:48:28 crc kubenswrapper[4965]: I0219 10:48:28.758435 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2a094c76-7174-4b58-8b32-12020982c63b/ceilometer-central-agent/0.log" Feb 19 10:48:28 crc kubenswrapper[4965]: I0219 10:48:28.875101 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2a094c76-7174-4b58-8b32-12020982c63b/proxy-httpd/0.log" Feb 19 10:48:28 crc kubenswrapper[4965]: I0219 10:48:28.882316 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2a094c76-7174-4b58-8b32-12020982c63b/ceilometer-notification-agent/0.log" Feb 19 10:48:28 crc kubenswrapper[4965]: I0219 10:48:28.923969 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2a094c76-7174-4b58-8b32-12020982c63b/sg-core/0.log" Feb 19 10:48:29 crc kubenswrapper[4965]: I0219 10:48:29.173148 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_92030a04-19d0-4766-b560-3d5b64be8716/cinder-api-log/0.log" Feb 19 10:48:29 crc kubenswrapper[4965]: I0219 10:48:29.233229 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_92030a04-19d0-4766-b560-3d5b64be8716/cinder-api/0.log" Feb 19 10:48:29 crc kubenswrapper[4965]: I0219 10:48:29.509722 4965 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_cinder-scheduler-0_2e838493-1547-4574-8af2-eff17e75c65b/cinder-scheduler/0.log" Feb 19 10:48:29 crc kubenswrapper[4965]: I0219 10:48:29.541729 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2e838493-1547-4574-8af2-eff17e75c65b/probe/0.log" Feb 19 10:48:29 crc kubenswrapper[4965]: I0219 10:48:29.652312 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_3ed10660-2674-4274-a62b-366af8d375da/cloudkitty-api/0.log" Feb 19 10:48:29 crc kubenswrapper[4965]: I0219 10:48:29.861747 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_5038aafe-e39d-479c-b355-bbac1a77fa4a/loki-compactor/0.log" Feb 19 10:48:29 crc kubenswrapper[4965]: I0219 10:48:29.900075 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_3ed10660-2674-4274-a62b-366af8d375da/cloudkitty-api-log/0.log" Feb 19 10:48:30 crc kubenswrapper[4965]: I0219 10:48:30.066969 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-585d9bcbc-ktrzq_afbb0d2a-5cd0-4358-b5b0-c22749400326/loki-distributor/0.log" Feb 19 10:48:30 crc kubenswrapper[4965]: I0219 10:48:30.188270 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-9vkbl_3c673b0f-7739-4b94-99b9-abd66fb51937/gateway/0.log" Feb 19 10:48:30 crc kubenswrapper[4965]: I0219 10:48:30.327043 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-h6555_b1f1bb07-4bb0-4a1c-ab8a-0e97e8d57f1a/gateway/0.log" Feb 19 10:48:30 crc kubenswrapper[4965]: I0219 10:48:30.563134 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_f9902193-fba0-4ea4-8de6-352459b1c13f/loki-index-gateway/0.log" Feb 19 10:48:30 crc kubenswrapper[4965]: I0219 
10:48:30.760277 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_faab82f2-bc31-438d-b329-9a31d6ba5040/loki-ingester/0.log" Feb 19 10:48:31 crc kubenswrapper[4965]: I0219 10:48:31.114139 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-67bb4dfcd8-4bmcg_849f49ac-72be-49ce-ab6b-2eb5890a6337/loki-query-frontend/0.log" Feb 19 10:48:31 crc kubenswrapper[4965]: I0219 10:48:31.235420 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-58c84b5844-hb4c6_7adcb318-8832-417d-814a-7a2d21c8af30/loki-querier/0.log" Feb 19 10:48:31 crc kubenswrapper[4965]: I0219 10:48:31.778763 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-lkc2z_04f58633-9350-49a8-9c41-522490a298eb/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:48:31 crc kubenswrapper[4965]: I0219 10:48:31.920032 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-sxwbj_64c1fbe6-a102-40e1-920a-319b6664c77e/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:48:32 crc kubenswrapper[4965]: I0219 10:48:32.580457 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-gdxjp_5f29d993-47df-4952-a137-bb5cf52ea59a/init/0.log" Feb 19 10:48:32 crc kubenswrapper[4965]: I0219 10:48:32.878826 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-gdxjp_5f29d993-47df-4952-a137-bb5cf52ea59a/init/0.log" Feb 19 10:48:32 crc kubenswrapper[4965]: I0219 10:48:32.889827 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-gdxjp_5f29d993-47df-4952-a137-bb5cf52ea59a/dnsmasq-dns/0.log" Feb 19 10:48:32 crc kubenswrapper[4965]: I0219 10:48:32.891775 4965 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-vfth9_2cc29510-fe65-45e1-b4fe-fef9bb2923b0/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:48:33 crc kubenswrapper[4965]: I0219 10:48:33.153552 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d1ece847-d2dd-42e7-ad4c-5f9ad04529f8/glance-httpd/0.log" Feb 19 10:48:33 crc kubenswrapper[4965]: I0219 10:48:33.249243 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d1ece847-d2dd-42e7-ad4c-5f9ad04529f8/glance-log/0.log" Feb 19 10:48:33 crc kubenswrapper[4965]: I0219 10:48:33.448141 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b7e632f4-f05e-4ac6-a1cd-96ae3244c450/glance-httpd/0.log" Feb 19 10:48:33 crc kubenswrapper[4965]: I0219 10:48:33.465680 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b7e632f4-f05e-4ac6-a1cd-96ae3244c450/glance-log/0.log" Feb 19 10:48:33 crc kubenswrapper[4965]: I0219 10:48:33.613261 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-gtd56_25080ebe-a4ea-4698-b64c-b7064ff93db6/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:48:33 crc kubenswrapper[4965]: I0219 10:48:33.785262 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-kpx6m_0f72a778-ba2a-4454-bba8-865897b5d656/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:48:34 crc kubenswrapper[4965]: I0219 10:48:34.520614 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_947a2943-c25c-4606-848a-a2942e8988c9/kube-state-metrics/0.log" Feb 19 10:48:34 crc kubenswrapper[4965]: I0219 10:48:34.828424 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-zt9j8_6b29dda1-69ac-4d2a-a078-e2f1a7103b67/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:48:34 crc kubenswrapper[4965]: I0219 10:48:34.937444 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6dbb44f597-5cgmc_3a9b7b7c-7f72-46f8-aa26-1f03e4f0fd4b/keystone-api/0.log" Feb 19 10:48:35 crc kubenswrapper[4965]: I0219 10:48:35.020644 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-proc-0_aa682c21-5c48-4518-9033-2f28eae7f24d/cloudkitty-proc/0.log" Feb 19 10:48:35 crc kubenswrapper[4965]: I0219 10:48:35.331851 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-76546766f9-plbd4_40c5d1a6-44fc-4f35-a393-d82f69dde17f/neutron-httpd/0.log" Feb 19 10:48:35 crc kubenswrapper[4965]: I0219 10:48:35.372663 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-76546766f9-plbd4_40c5d1a6-44fc-4f35-a393-d82f69dde17f/neutron-api/0.log" Feb 19 10:48:35 crc kubenswrapper[4965]: I0219 10:48:35.660156 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-dnm76_1189041a-04c1-4fa1-9c71-daf77ef8b3fe/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:48:36 crc kubenswrapper[4965]: I0219 10:48:36.173702 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4b71fda7-2162-4dda-a5ba-053eb96e59a9/nova-api-log/0.log" Feb 19 10:48:36 crc kubenswrapper[4965]: I0219 10:48:36.377942 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_8af99122-00d4-45e7-8e66-f541ba54a66a/nova-cell0-conductor-conductor/0.log" Feb 19 10:48:36 crc kubenswrapper[4965]: I0219 10:48:36.609487 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-conductor-0_04d07332-2cb5-49b4-b70c-9f3a13f73a09/nova-cell1-conductor-conductor/0.log" Feb 19 10:48:36 crc kubenswrapper[4965]: I0219 10:48:36.612726 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4b71fda7-2162-4dda-a5ba-053eb96e59a9/nova-api-api/0.log" Feb 19 10:48:36 crc kubenswrapper[4965]: I0219 10:48:36.699858 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_9358573e-5a2b-4f2a-bbff-0e55e0e00869/nova-cell1-novncproxy-novncproxy/0.log" Feb 19 10:48:36 crc kubenswrapper[4965]: I0219 10:48:36.891298 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-x4cs9_4bb72c3c-878c-497d-8105-767df1971b0d/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:48:37 crc kubenswrapper[4965]: I0219 10:48:37.120456 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b29094ed-8036-44ed-a882-7ad1d5ad4cc3/nova-metadata-log/0.log" Feb 19 10:48:37 crc kubenswrapper[4965]: I0219 10:48:37.503400 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6bc77d18-18ba-4f28-ab8c-a1d4e77996f3/nova-scheduler-scheduler/0.log" Feb 19 10:48:37 crc kubenswrapper[4965]: I0219 10:48:37.546788 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5b862187-0edd-4939-9260-d0d35653485c/mysql-bootstrap/0.log" Feb 19 10:48:37 crc kubenswrapper[4965]: I0219 10:48:37.738813 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5b862187-0edd-4939-9260-d0d35653485c/mysql-bootstrap/0.log" Feb 19 10:48:37 crc kubenswrapper[4965]: I0219 10:48:37.755872 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5b862187-0edd-4939-9260-d0d35653485c/galera/0.log" Feb 19 10:48:38 crc kubenswrapper[4965]: I0219 10:48:38.027757 
4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_215df1f4-6c30-4144-b141-5a867e8d2728/mysql-bootstrap/0.log" Feb 19 10:48:38 crc kubenswrapper[4965]: I0219 10:48:38.218038 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_215df1f4-6c30-4144-b141-5a867e8d2728/mysql-bootstrap/0.log" Feb 19 10:48:38 crc kubenswrapper[4965]: I0219 10:48:38.312847 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_215df1f4-6c30-4144-b141-5a867e8d2728/galera/0.log" Feb 19 10:48:38 crc kubenswrapper[4965]: I0219 10:48:38.558893 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_96d45563-22bf-42f1-bc03-4fd3b223293d/openstackclient/0.log" Feb 19 10:48:38 crc kubenswrapper[4965]: I0219 10:48:38.622543 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b29094ed-8036-44ed-a882-7ad1d5ad4cc3/nova-metadata-metadata/0.log" Feb 19 10:48:38 crc kubenswrapper[4965]: I0219 10:48:38.674535 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-wwgl6_154fb9e1-1e52-4338-964c-8210b8bbbc57/openstack-network-exporter/0.log" Feb 19 10:48:38 crc kubenswrapper[4965]: I0219 10:48:38.829813 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-mwlb6_0a44ec63-c497-4874-b0ca-ecb9d6c9bc2a/ovn-controller/0.log" Feb 19 10:48:38 crc kubenswrapper[4965]: I0219 10:48:38.953372 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jlns7_3f408d9e-6ca2-490c-be7e-0516fa19db75/ovsdb-server-init/0.log" Feb 19 10:48:39 crc kubenswrapper[4965]: I0219 10:48:39.142138 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jlns7_3f408d9e-6ca2-490c-be7e-0516fa19db75/ovs-vswitchd/0.log" Feb 19 10:48:39 crc kubenswrapper[4965]: I0219 10:48:39.179775 4965 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jlns7_3f408d9e-6ca2-490c-be7e-0516fa19db75/ovsdb-server-init/0.log" Feb 19 10:48:39 crc kubenswrapper[4965]: I0219 10:48:39.194578 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jlns7_3f408d9e-6ca2-490c-be7e-0516fa19db75/ovsdb-server/0.log" Feb 19 10:48:39 crc kubenswrapper[4965]: I0219 10:48:39.459114 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-xd7zh_24c52aa6-9277-4040-8262-1bac8005a463/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:48:39 crc kubenswrapper[4965]: I0219 10:48:39.705027 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_89c93b76-069c-4c94-aa84-a77d7e4c8e26/openstack-network-exporter/0.log" Feb 19 10:48:39 crc kubenswrapper[4965]: I0219 10:48:39.841041 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_89c93b76-069c-4c94-aa84-a77d7e4c8e26/ovn-northd/0.log" Feb 19 10:48:40 crc kubenswrapper[4965]: I0219 10:48:40.046784 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_98633dba-c95c-4f35-a045-5c738d652492/ovsdbserver-nb/0.log" Feb 19 10:48:40 crc kubenswrapper[4965]: I0219 10:48:40.063649 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_98633dba-c95c-4f35-a045-5c738d652492/openstack-network-exporter/0.log" Feb 19 10:48:40 crc kubenswrapper[4965]: I0219 10:48:40.208415 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1520d7ba-9d74-47f8-9c7a-9731ae9ff49e/openstack-network-exporter/0.log" Feb 19 10:48:40 crc kubenswrapper[4965]: I0219 10:48:40.284536 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1520d7ba-9d74-47f8-9c7a-9731ae9ff49e/ovsdbserver-sb/0.log" Feb 19 10:48:40 crc kubenswrapper[4965]: I0219 10:48:40.428652 4965 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-fbb65bccb-zmlg7_8505e9f1-238a-4f32-95a4-95979a4f7bac/placement-api/0.log" Feb 19 10:48:40 crc kubenswrapper[4965]: I0219 10:48:40.629482 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-fbb65bccb-zmlg7_8505e9f1-238a-4f32-95a4-95979a4f7bac/placement-log/0.log" Feb 19 10:48:40 crc kubenswrapper[4965]: I0219 10:48:40.713903 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_efd4d0e2-bc4c-4bac-9236-37338445f7c7/init-config-reloader/0.log" Feb 19 10:48:40 crc kubenswrapper[4965]: I0219 10:48:40.960324 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_efd4d0e2-bc4c-4bac-9236-37338445f7c7/prometheus/0.log" Feb 19 10:48:40 crc kubenswrapper[4965]: I0219 10:48:40.969431 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_efd4d0e2-bc4c-4bac-9236-37338445f7c7/thanos-sidecar/0.log" Feb 19 10:48:41 crc kubenswrapper[4965]: I0219 10:48:41.030692 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_efd4d0e2-bc4c-4bac-9236-37338445f7c7/config-reloader/0.log" Feb 19 10:48:41 crc kubenswrapper[4965]: I0219 10:48:41.118685 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_efd4d0e2-bc4c-4bac-9236-37338445f7c7/init-config-reloader/0.log" Feb 19 10:48:41 crc kubenswrapper[4965]: I0219 10:48:41.483058 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8214d39f-90ff-4188-abbf-6a097f33eef0/setup-container/0.log" Feb 19 10:48:41 crc kubenswrapper[4965]: I0219 10:48:41.778026 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8214d39f-90ff-4188-abbf-6a097f33eef0/setup-container/0.log" Feb 19 10:48:41 crc kubenswrapper[4965]: I0219 
10:48:41.786434 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8214d39f-90ff-4188-abbf-6a097f33eef0/rabbitmq/0.log" Feb 19 10:48:41 crc kubenswrapper[4965]: I0219 10:48:41.978829 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa/setup-container/0.log" Feb 19 10:48:42 crc kubenswrapper[4965]: I0219 10:48:42.093037 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa/setup-container/0.log" Feb 19 10:48:42 crc kubenswrapper[4965]: I0219 10:48:42.264759 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-vprz4_6991c5fe-b928-4ea6-a3e5-bb8dbf6f9763/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:48:42 crc kubenswrapper[4965]: I0219 10:48:42.267499 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e01d61c0-b7b2-4ddc-88f6-49dce4c5cdaa/rabbitmq/0.log" Feb 19 10:48:42 crc kubenswrapper[4965]: I0219 10:48:42.576507 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-87z8v_58f8c7f1-d425-4b21-ba27-1e47c69ddd93/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:48:42 crc kubenswrapper[4965]: I0219 10:48:42.589482 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-rdb84_da068017-3803-4d74-bea1-932b1d829055/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:48:43 crc kubenswrapper[4965]: I0219 10:48:43.405041 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-bv8v8_6827b2eb-c6f9-42d2-b11d-ef676213f97f/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:48:44 crc kubenswrapper[4965]: I0219 10:48:44.001276 4965 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-rz8q9_b34f48f2-8dcc-4e0f-a1db-03a8adcc08e4/ssh-known-hosts-edpm-deployment/0.log" Feb 19 10:48:44 crc kubenswrapper[4965]: I0219 10:48:44.158233 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-76485c5b9f-wzzpl_69ee7a64-2965-42d1-bad2-82087733b567/proxy-httpd/0.log" Feb 19 10:48:44 crc kubenswrapper[4965]: I0219 10:48:44.258253 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-76485c5b9f-wzzpl_69ee7a64-2965-42d1-bad2-82087733b567/proxy-server/0.log" Feb 19 10:48:44 crc kubenswrapper[4965]: I0219 10:48:44.295645 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-kx9jd_f2a6db35-796d-485d-9b96-5c03b7d7725b/swift-ring-rebalance/0.log" Feb 19 10:48:44 crc kubenswrapper[4965]: I0219 10:48:44.503885 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/account-auditor/0.log" Feb 19 10:48:44 crc kubenswrapper[4965]: I0219 10:48:44.586746 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/account-reaper/0.log" Feb 19 10:48:44 crc kubenswrapper[4965]: I0219 10:48:44.635565 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/account-replicator/0.log" Feb 19 10:48:44 crc kubenswrapper[4965]: I0219 10:48:44.707326 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/account-server/0.log" Feb 19 10:48:44 crc kubenswrapper[4965]: I0219 10:48:44.810233 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/container-auditor/0.log" Feb 19 10:48:44 crc kubenswrapper[4965]: I0219 10:48:44.847662 4965 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/container-server/0.log" Feb 19 10:48:44 crc kubenswrapper[4965]: I0219 10:48:44.848026 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/container-replicator/0.log" Feb 19 10:48:44 crc kubenswrapper[4965]: I0219 10:48:44.933227 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/container-updater/0.log" Feb 19 10:48:45 crc kubenswrapper[4965]: I0219 10:48:45.100255 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/object-replicator/0.log" Feb 19 10:48:45 crc kubenswrapper[4965]: I0219 10:48:45.107844 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/object-auditor/0.log" Feb 19 10:48:45 crc kubenswrapper[4965]: I0219 10:48:45.110157 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/object-expirer/0.log" Feb 19 10:48:45 crc kubenswrapper[4965]: I0219 10:48:45.345132 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/object-updater/0.log" Feb 19 10:48:45 crc kubenswrapper[4965]: I0219 10:48:45.346499 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/swift-recon-cron/0.log" Feb 19 10:48:45 crc kubenswrapper[4965]: I0219 10:48:45.364256 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/object-server/0.log" Feb 19 10:48:45 crc kubenswrapper[4965]: I0219 10:48:45.447324 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_2c3ae050-b164-4fbc-9e5b-392eb0a4fb53/rsync/0.log" Feb 19 10:48:45 crc kubenswrapper[4965]: I0219 10:48:45.705589 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-lf5p2_b3d2f922-3941-4ff3-92fc-6bb14cd46698/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:48:45 crc kubenswrapper[4965]: I0219 10:48:45.730147 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_acd2e1a9-ac5b-4be2-b7e2-6bc4526bf67a/tempest-tests-tempest-tests-runner/0.log" Feb 19 10:48:45 crc kubenswrapper[4965]: I0219 10:48:45.895281 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_ca9aec62-8a03-4f2d-acf7-cb4c5a08be00/test-operator-logs-container/0.log" Feb 19 10:48:45 crc kubenswrapper[4965]: I0219 10:48:45.958107 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-58l4m_9873ade5-a134-4b72-bbfe-468df59b993f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 10:48:50 crc kubenswrapper[4965]: I0219 10:48:50.996498 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_40caef4c-7f84-42cb-b51c-b0884efc2052/memcached/0.log" Feb 19 10:49:20 crc kubenswrapper[4965]: I0219 10:49:20.611233 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr_62a8af9d-5a83-4c80-bc2e-49c0c576ed6e/util/0.log" Feb 19 10:49:20 crc kubenswrapper[4965]: I0219 10:49:20.836453 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr_62a8af9d-5a83-4c80-bc2e-49c0c576ed6e/pull/0.log" Feb 19 10:49:20 crc kubenswrapper[4965]: I0219 10:49:20.854990 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr_62a8af9d-5a83-4c80-bc2e-49c0c576ed6e/util/0.log" Feb 19 10:49:20 crc kubenswrapper[4965]: I0219 10:49:20.896529 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr_62a8af9d-5a83-4c80-bc2e-49c0c576ed6e/pull/0.log" Feb 19 10:49:21 crc kubenswrapper[4965]: I0219 10:49:21.111052 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr_62a8af9d-5a83-4c80-bc2e-49c0c576ed6e/util/0.log" Feb 19 10:49:21 crc kubenswrapper[4965]: I0219 10:49:21.159463 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr_62a8af9d-5a83-4c80-bc2e-49c0c576ed6e/extract/0.log" Feb 19 10:49:21 crc kubenswrapper[4965]: I0219 10:49:21.161086 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b02772c0e8c97b926926e0a1d8d8c995d8f01d9d9d64402b28cb4393dfntdwr_62a8af9d-5a83-4c80-bc2e-49c0c576ed6e/pull/0.log" Feb 19 10:49:21 crc kubenswrapper[4965]: I0219 10:49:21.642537 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-ztvs5_7c1737a3-9dfe-4208-a8da-8be7f09394d9/manager/0.log" Feb 19 10:49:22 crc kubenswrapper[4965]: I0219 10:49:22.169343 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-bndgq_ef077548-5e44-43f1-9f0d-3cf539bca16b/manager/0.log" Feb 19 10:49:22 crc kubenswrapper[4965]: I0219 10:49:22.426436 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-4rtq9_8e1c4dc5-2d5b-46fb-b3cc-1ae2749fd02c/manager/0.log" Feb 19 10:49:22 crc kubenswrapper[4965]: I0219 
10:49:22.652019 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-vgqfx_fe5bbdd4-d10a-4bc6-bd35-76c7abb54600/manager/0.log" Feb 19 10:49:23 crc kubenswrapper[4965]: I0219 10:49:23.362307 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-zqmsr_2f161526-b0fd-453b-8ae7-7b9b7a485b97/manager/0.log" Feb 19 10:49:23 crc kubenswrapper[4965]: I0219 10:49:23.390746 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-z78k7_73c20094-0abc-4525-ae77-d571755841fa/manager/0.log" Feb 19 10:49:23 crc kubenswrapper[4965]: I0219 10:49:23.772984 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-h5skt_5747cc94-5621-4a7d-b599-f2a0f2a2aa29/manager/0.log" Feb 19 10:49:24 crc kubenswrapper[4965]: I0219 10:49:24.007260 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-zh77z_c94f0d1d-5edd-4b64-b2c7-85bdc5022ec3/manager/0.log" Feb 19 10:49:24 crc kubenswrapper[4965]: I0219 10:49:24.166124 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-2g7mq_74f4ddc1-28bd-411f-8f0c-c5bfc3bfcec6/manager/0.log" Feb 19 10:49:24 crc kubenswrapper[4965]: I0219 10:49:24.338513 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-4md54_9898282c-422b-49dd-b369-da910d49a2d8/manager/0.log" Feb 19 10:49:24 crc kubenswrapper[4965]: I0219 10:49:24.453952 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-49xr8_ec34bcd2-48d7-4522-a32a-268a3a1b385c/manager/0.log" Feb 19 10:49:24 crc 
kubenswrapper[4965]: I0219 10:49:24.656098 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-7mzd9_18230479-3d13-49f7-a2a1-95a191acb3db/manager/0.log" Feb 19 10:49:24 crc kubenswrapper[4965]: I0219 10:49:24.874478 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq_58e82cd5-3bd0-4f99-b958-29e5541fa49a/manager/0.log" Feb 19 10:49:25 crc kubenswrapper[4965]: I0219 10:49:25.137653 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-d86db9fbc-vplp8_5c35ac3d-3c0a-48b2-a17d-ce896fa8a00e/operator/0.log" Feb 19 10:49:25 crc kubenswrapper[4965]: I0219 10:49:25.320812 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-wpjxk_2c206b8c-0a2e-4081-8f51-29977545ef20/registry-server/0.log" Feb 19 10:49:25 crc kubenswrapper[4965]: I0219 10:49:25.678273 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-wp77d_ca57fee7-64f8-4c49-9170-6f6e618c78e7/manager/0.log" Feb 19 10:49:25 crc kubenswrapper[4965]: I0219 10:49:25.987755 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-h27hl_a354e865-3819-4147-a565-4682bc4c6a6c/manager/0.log" Feb 19 10:49:26 crc kubenswrapper[4965]: I0219 10:49:26.194149 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-frln4_f1723aed-01cb-4ac1-b191-299a6dd638e5/operator/0.log" Feb 19 10:49:26 crc kubenswrapper[4965]: I0219 10:49:26.449488 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-h5rvt_e70fa350-bca9-4007-80a9-15cfb3a56b11/manager/0.log" Feb 19 10:49:27 
crc kubenswrapper[4965]: I0219 10:49:27.060440 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-jzssc_7e1ae3d6-7af0-406d-b740-98c9f5c9403c/manager/0.log" Feb 19 10:49:27 crc kubenswrapper[4965]: I0219 10:49:27.331151 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7f6588fc96-6phd8_186369a2-50b6-4226-be98-8876e469033f/manager/0.log" Feb 19 10:49:27 crc kubenswrapper[4965]: I0219 10:49:27.496500 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-54vfn_6bd1df07-8b75-44b8-91a3-4f612b64c279/manager/0.log" Feb 19 10:49:27 crc kubenswrapper[4965]: I0219 10:49:27.537132 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-dff68c48-5928s_f1fcb3fa-62de-4b0b-93db-3e401ff94fe4/manager/0.log" Feb 19 10:49:27 crc kubenswrapper[4965]: I0219 10:49:27.898078 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-glzx9_a0ff2743-9ab6-4388-b0af-06e06c3e7587/manager/0.log" Feb 19 10:49:33 crc kubenswrapper[4965]: I0219 10:49:33.429175 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-jncdt_24b54009-86e7-409a-991e-a406d38ab751/manager/0.log" Feb 19 10:49:54 crc kubenswrapper[4965]: I0219 10:49:54.148832 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-2v6fb_5e0fcf66-e50c-4c4c-9370-08ed336d25d9/control-plane-machine-set-operator/0.log" Feb 19 10:49:54 crc kubenswrapper[4965]: I0219 10:49:54.339888 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hqt8l_49cf856e-b37d-4ab6-9c6e-241cbc4be93e/kube-rbac-proxy/0.log" Feb 19 10:49:54 crc kubenswrapper[4965]: I0219 10:49:54.394115 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hqt8l_49cf856e-b37d-4ab6-9c6e-241cbc4be93e/machine-api-operator/0.log" Feb 19 10:50:10 crc kubenswrapper[4965]: I0219 10:50:10.028992 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-vmfkz_41967e40-5df3-456a-aae9-86b898d18216/cert-manager-controller/0.log" Feb 19 10:50:10 crc kubenswrapper[4965]: I0219 10:50:10.056063 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-ls5h7_592650ba-f791-4f32-bbbe-23c0a5d9e82b/cert-manager-cainjector/0.log" Feb 19 10:50:10 crc kubenswrapper[4965]: I0219 10:50:10.147453 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-5vx5v_ac8283e8-11a9-4b2f-ac84-4f8f6a7821bc/cert-manager-webhook/0.log" Feb 19 10:50:16 crc kubenswrapper[4965]: I0219 10:50:16.600923 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:50:16 crc kubenswrapper[4965]: I0219 10:50:16.601400 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:50:23 crc kubenswrapper[4965]: I0219 10:50:23.592654 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-5xfsg_85cb536f-7492-4fb3-90dd-d71c7d207771/nmstate-console-plugin/0.log" Feb 19 10:50:23 crc kubenswrapper[4965]: I0219 10:50:23.826587 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ftvpm_75ab303c-d1a1-45fd-b457-b5c2a118e898/nmstate-handler/0.log" Feb 19 10:50:23 crc kubenswrapper[4965]: I0219 10:50:23.880975 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-jnl6n_fbb4bfee-56b1-49ff-ae41-a6ea373fd06a/kube-rbac-proxy/0.log" Feb 19 10:50:23 crc kubenswrapper[4965]: I0219 10:50:23.966239 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-jnl6n_fbb4bfee-56b1-49ff-ae41-a6ea373fd06a/nmstate-metrics/0.log" Feb 19 10:50:24 crc kubenswrapper[4965]: I0219 10:50:24.079897 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-kkrqs_56e58cdb-3ef2-4cbf-a926-70ac47e83f9c/nmstate-operator/0.log" Feb 19 10:50:24 crc kubenswrapper[4965]: I0219 10:50:24.190490 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-8gm87_1b0feb3d-7d0d-43b4-bf7c-afd4e11dc0b7/nmstate-webhook/0.log" Feb 19 10:50:39 crc kubenswrapper[4965]: I0219 10:50:39.081775 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-564bb987d4-6pxn4_d8ed232a-7084-4f69-afdf-6d674b5864de/kube-rbac-proxy/0.log" Feb 19 10:50:39 crc kubenswrapper[4965]: I0219 10:50:39.125011 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-564bb987d4-6pxn4_d8ed232a-7084-4f69-afdf-6d674b5864de/manager/0.log" Feb 19 10:50:46 crc kubenswrapper[4965]: I0219 10:50:46.600910 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:50:46 crc kubenswrapper[4965]: I0219 10:50:46.601452 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:50:54 crc kubenswrapper[4965]: I0219 10:50:54.964606 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-qfjz7_97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1/prometheus-operator/0.log" Feb 19 10:50:55 crc kubenswrapper[4965]: I0219 10:50:55.241001 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5675bf8465-d42dv_0e50e1bd-3144-4362-9c46-355cfb2ba24f/prometheus-operator-admission-webhook/0.log" Feb 19 10:50:55 crc kubenswrapper[4965]: I0219 10:50:55.361386 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5675bf8465-h45db_0d85e95a-22ec-4364-a43c-04e60d68be0d/prometheus-operator-admission-webhook/0.log" Feb 19 10:50:55 crc kubenswrapper[4965]: I0219 10:50:55.490720 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-h4689_b7e1070f-f099-4a4f-a107-c1b8589af7c7/operator/0.log" Feb 19 10:50:55 crc kubenswrapper[4965]: I0219 10:50:55.570580 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-x7xjb_d55c4261-3d41-49fd-97dd-098bb8747449/perses-operator/0.log" Feb 19 10:50:56 crc kubenswrapper[4965]: I0219 10:50:56.254523 4965 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tffsw"] Feb 19 10:50:56 crc kubenswrapper[4965]: E0219 10:50:56.255159 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba92a1b6-927a-44c8-927a-1984643f760d" containerName="registry-server" Feb 19 10:50:56 crc kubenswrapper[4965]: I0219 10:50:56.255171 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba92a1b6-927a-44c8-927a-1984643f760d" containerName="registry-server" Feb 19 10:50:56 crc kubenswrapper[4965]: E0219 10:50:56.255185 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba92a1b6-927a-44c8-927a-1984643f760d" containerName="extract-utilities" Feb 19 10:50:56 crc kubenswrapper[4965]: I0219 10:50:56.255208 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba92a1b6-927a-44c8-927a-1984643f760d" containerName="extract-utilities" Feb 19 10:50:56 crc kubenswrapper[4965]: E0219 10:50:56.255234 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba92a1b6-927a-44c8-927a-1984643f760d" containerName="extract-content" Feb 19 10:50:56 crc kubenswrapper[4965]: I0219 10:50:56.255240 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba92a1b6-927a-44c8-927a-1984643f760d" containerName="extract-content" Feb 19 10:50:56 crc kubenswrapper[4965]: E0219 10:50:56.255249 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d41756d-00be-48a1-b087-7359114fc01b" containerName="container-00" Feb 19 10:50:56 crc kubenswrapper[4965]: I0219 10:50:56.255265 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d41756d-00be-48a1-b087-7359114fc01b" containerName="container-00" Feb 19 10:50:56 crc kubenswrapper[4965]: I0219 10:50:56.255467 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d41756d-00be-48a1-b087-7359114fc01b" containerName="container-00" Feb 19 10:50:56 crc kubenswrapper[4965]: I0219 10:50:56.255486 4965 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="ba92a1b6-927a-44c8-927a-1984643f760d" containerName="registry-server" Feb 19 10:50:56 crc kubenswrapper[4965]: I0219 10:50:56.256944 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tffsw" Feb 19 10:50:56 crc kubenswrapper[4965]: I0219 10:50:56.279483 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tffsw"] Feb 19 10:50:56 crc kubenswrapper[4965]: I0219 10:50:56.349872 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf020c44-0110-4d4b-b3d8-9c7101640c06-utilities\") pod \"certified-operators-tffsw\" (UID: \"cf020c44-0110-4d4b-b3d8-9c7101640c06\") " pod="openshift-marketplace/certified-operators-tffsw" Feb 19 10:50:56 crc kubenswrapper[4965]: I0219 10:50:56.349928 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf020c44-0110-4d4b-b3d8-9c7101640c06-catalog-content\") pod \"certified-operators-tffsw\" (UID: \"cf020c44-0110-4d4b-b3d8-9c7101640c06\") " pod="openshift-marketplace/certified-operators-tffsw" Feb 19 10:50:56 crc kubenswrapper[4965]: I0219 10:50:56.350003 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhwh4\" (UniqueName: \"kubernetes.io/projected/cf020c44-0110-4d4b-b3d8-9c7101640c06-kube-api-access-hhwh4\") pod \"certified-operators-tffsw\" (UID: \"cf020c44-0110-4d4b-b3d8-9c7101640c06\") " pod="openshift-marketplace/certified-operators-tffsw" Feb 19 10:50:56 crc kubenswrapper[4965]: I0219 10:50:56.452318 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf020c44-0110-4d4b-b3d8-9c7101640c06-utilities\") pod \"certified-operators-tffsw\" (UID: 
\"cf020c44-0110-4d4b-b3d8-9c7101640c06\") " pod="openshift-marketplace/certified-operators-tffsw" Feb 19 10:50:56 crc kubenswrapper[4965]: I0219 10:50:56.452370 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf020c44-0110-4d4b-b3d8-9c7101640c06-catalog-content\") pod \"certified-operators-tffsw\" (UID: \"cf020c44-0110-4d4b-b3d8-9c7101640c06\") " pod="openshift-marketplace/certified-operators-tffsw" Feb 19 10:50:56 crc kubenswrapper[4965]: I0219 10:50:56.452452 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhwh4\" (UniqueName: \"kubernetes.io/projected/cf020c44-0110-4d4b-b3d8-9c7101640c06-kube-api-access-hhwh4\") pod \"certified-operators-tffsw\" (UID: \"cf020c44-0110-4d4b-b3d8-9c7101640c06\") " pod="openshift-marketplace/certified-operators-tffsw" Feb 19 10:50:56 crc kubenswrapper[4965]: I0219 10:50:56.453275 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf020c44-0110-4d4b-b3d8-9c7101640c06-utilities\") pod \"certified-operators-tffsw\" (UID: \"cf020c44-0110-4d4b-b3d8-9c7101640c06\") " pod="openshift-marketplace/certified-operators-tffsw" Feb 19 10:50:56 crc kubenswrapper[4965]: I0219 10:50:56.453488 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf020c44-0110-4d4b-b3d8-9c7101640c06-catalog-content\") pod \"certified-operators-tffsw\" (UID: \"cf020c44-0110-4d4b-b3d8-9c7101640c06\") " pod="openshift-marketplace/certified-operators-tffsw" Feb 19 10:50:56 crc kubenswrapper[4965]: I0219 10:50:56.470532 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhwh4\" (UniqueName: \"kubernetes.io/projected/cf020c44-0110-4d4b-b3d8-9c7101640c06-kube-api-access-hhwh4\") pod \"certified-operators-tffsw\" (UID: 
\"cf020c44-0110-4d4b-b3d8-9c7101640c06\") " pod="openshift-marketplace/certified-operators-tffsw" Feb 19 10:50:56 crc kubenswrapper[4965]: I0219 10:50:56.583108 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tffsw" Feb 19 10:50:57 crc kubenswrapper[4965]: I0219 10:50:57.294872 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tffsw"] Feb 19 10:50:57 crc kubenswrapper[4965]: I0219 10:50:57.365500 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tffsw" event={"ID":"cf020c44-0110-4d4b-b3d8-9c7101640c06","Type":"ContainerStarted","Data":"b3f0b6910093f34fd6c7166daa1644900e263e6b1f4c3da77a93b02c353b215d"} Feb 19 10:50:58 crc kubenswrapper[4965]: I0219 10:50:58.376792 4965 generic.go:334] "Generic (PLEG): container finished" podID="cf020c44-0110-4d4b-b3d8-9c7101640c06" containerID="3440c49eb36f1eeee82cf99777cae8943692729be265b0a8b98c7e1cf00a210b" exitCode=0 Feb 19 10:50:58 crc kubenswrapper[4965]: I0219 10:50:58.376864 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tffsw" event={"ID":"cf020c44-0110-4d4b-b3d8-9c7101640c06","Type":"ContainerDied","Data":"3440c49eb36f1eeee82cf99777cae8943692729be265b0a8b98c7e1cf00a210b"} Feb 19 10:51:00 crc kubenswrapper[4965]: I0219 10:51:00.398971 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tffsw" event={"ID":"cf020c44-0110-4d4b-b3d8-9c7101640c06","Type":"ContainerStarted","Data":"beb6131856ad458a2a3daeaa992d211f874f71d04e8dda5ac573058102ae0d7b"} Feb 19 10:51:02 crc kubenswrapper[4965]: I0219 10:51:02.417000 4965 generic.go:334] "Generic (PLEG): container finished" podID="cf020c44-0110-4d4b-b3d8-9c7101640c06" containerID="beb6131856ad458a2a3daeaa992d211f874f71d04e8dda5ac573058102ae0d7b" exitCode=0 Feb 19 10:51:02 crc kubenswrapper[4965]: I0219 
10:51:02.417090 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tffsw" event={"ID":"cf020c44-0110-4d4b-b3d8-9c7101640c06","Type":"ContainerDied","Data":"beb6131856ad458a2a3daeaa992d211f874f71d04e8dda5ac573058102ae0d7b"} Feb 19 10:51:03 crc kubenswrapper[4965]: I0219 10:51:03.428327 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tffsw" event={"ID":"cf020c44-0110-4d4b-b3d8-9c7101640c06","Type":"ContainerStarted","Data":"0c8e90048263604e099a97fc5b918bcb09a14aa65f9ec7d3dad55671813a8eb1"} Feb 19 10:51:03 crc kubenswrapper[4965]: I0219 10:51:03.452850 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tffsw" podStartSLOduration=2.999709212 podStartE2EDuration="7.452825004s" podCreationTimestamp="2026-02-19 10:50:56 +0000 UTC" firstStartedPulling="2026-02-19 10:50:58.379918031 +0000 UTC m=+4114.001239351" lastFinishedPulling="2026-02-19 10:51:02.833033793 +0000 UTC m=+4118.454355143" observedRunningTime="2026-02-19 10:51:03.444935643 +0000 UTC m=+4119.066256943" watchObservedRunningTime="2026-02-19 10:51:03.452825004 +0000 UTC m=+4119.074146314" Feb 19 10:51:06 crc kubenswrapper[4965]: I0219 10:51:06.584238 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tffsw" Feb 19 10:51:06 crc kubenswrapper[4965]: I0219 10:51:06.586651 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tffsw" Feb 19 10:51:06 crc kubenswrapper[4965]: I0219 10:51:06.637445 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tffsw" Feb 19 10:51:08 crc kubenswrapper[4965]: I0219 10:51:08.142532 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rzsbp"] Feb 19 10:51:08 crc 
kubenswrapper[4965]: I0219 10:51:08.147995 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rzsbp" Feb 19 10:51:08 crc kubenswrapper[4965]: I0219 10:51:08.173511 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzsbp"] Feb 19 10:51:08 crc kubenswrapper[4965]: I0219 10:51:08.247114 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bdd78dd-bb23-4589-9991-ad5296dacb37-catalog-content\") pod \"redhat-marketplace-rzsbp\" (UID: \"3bdd78dd-bb23-4589-9991-ad5296dacb37\") " pod="openshift-marketplace/redhat-marketplace-rzsbp" Feb 19 10:51:08 crc kubenswrapper[4965]: I0219 10:51:08.247399 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpmvr\" (UniqueName: \"kubernetes.io/projected/3bdd78dd-bb23-4589-9991-ad5296dacb37-kube-api-access-qpmvr\") pod \"redhat-marketplace-rzsbp\" (UID: \"3bdd78dd-bb23-4589-9991-ad5296dacb37\") " pod="openshift-marketplace/redhat-marketplace-rzsbp" Feb 19 10:51:08 crc kubenswrapper[4965]: I0219 10:51:08.247440 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bdd78dd-bb23-4589-9991-ad5296dacb37-utilities\") pod \"redhat-marketplace-rzsbp\" (UID: \"3bdd78dd-bb23-4589-9991-ad5296dacb37\") " pod="openshift-marketplace/redhat-marketplace-rzsbp" Feb 19 10:51:08 crc kubenswrapper[4965]: I0219 10:51:08.349442 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bdd78dd-bb23-4589-9991-ad5296dacb37-catalog-content\") pod \"redhat-marketplace-rzsbp\" (UID: \"3bdd78dd-bb23-4589-9991-ad5296dacb37\") " pod="openshift-marketplace/redhat-marketplace-rzsbp" Feb 19 10:51:08 
crc kubenswrapper[4965]: I0219 10:51:08.349649 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpmvr\" (UniqueName: \"kubernetes.io/projected/3bdd78dd-bb23-4589-9991-ad5296dacb37-kube-api-access-qpmvr\") pod \"redhat-marketplace-rzsbp\" (UID: \"3bdd78dd-bb23-4589-9991-ad5296dacb37\") " pod="openshift-marketplace/redhat-marketplace-rzsbp" Feb 19 10:51:08 crc kubenswrapper[4965]: I0219 10:51:08.349684 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bdd78dd-bb23-4589-9991-ad5296dacb37-utilities\") pod \"redhat-marketplace-rzsbp\" (UID: \"3bdd78dd-bb23-4589-9991-ad5296dacb37\") " pod="openshift-marketplace/redhat-marketplace-rzsbp" Feb 19 10:51:08 crc kubenswrapper[4965]: I0219 10:51:08.349908 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bdd78dd-bb23-4589-9991-ad5296dacb37-catalog-content\") pod \"redhat-marketplace-rzsbp\" (UID: \"3bdd78dd-bb23-4589-9991-ad5296dacb37\") " pod="openshift-marketplace/redhat-marketplace-rzsbp" Feb 19 10:51:08 crc kubenswrapper[4965]: I0219 10:51:08.350086 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bdd78dd-bb23-4589-9991-ad5296dacb37-utilities\") pod \"redhat-marketplace-rzsbp\" (UID: \"3bdd78dd-bb23-4589-9991-ad5296dacb37\") " pod="openshift-marketplace/redhat-marketplace-rzsbp" Feb 19 10:51:08 crc kubenswrapper[4965]: I0219 10:51:08.367856 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpmvr\" (UniqueName: \"kubernetes.io/projected/3bdd78dd-bb23-4589-9991-ad5296dacb37-kube-api-access-qpmvr\") pod \"redhat-marketplace-rzsbp\" (UID: \"3bdd78dd-bb23-4589-9991-ad5296dacb37\") " pod="openshift-marketplace/redhat-marketplace-rzsbp" Feb 19 10:51:08 crc kubenswrapper[4965]: I0219 10:51:08.514622 
4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rzsbp" Feb 19 10:51:09 crc kubenswrapper[4965]: I0219 10:51:09.160958 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tffsw" Feb 19 10:51:09 crc kubenswrapper[4965]: I0219 10:51:09.484416 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzsbp"] Feb 19 10:51:10 crc kubenswrapper[4965]: I0219 10:51:10.498756 4965 generic.go:334] "Generic (PLEG): container finished" podID="3bdd78dd-bb23-4589-9991-ad5296dacb37" containerID="0ef69fd5f1a8f6a6eb05a1f7bb97990cda14afa6482271d3de722c0b6b54f2ee" exitCode=0 Feb 19 10:51:10 crc kubenswrapper[4965]: I0219 10:51:10.498797 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzsbp" event={"ID":"3bdd78dd-bb23-4589-9991-ad5296dacb37","Type":"ContainerDied","Data":"0ef69fd5f1a8f6a6eb05a1f7bb97990cda14afa6482271d3de722c0b6b54f2ee"} Feb 19 10:51:10 crc kubenswrapper[4965]: I0219 10:51:10.498845 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzsbp" event={"ID":"3bdd78dd-bb23-4589-9991-ad5296dacb37","Type":"ContainerStarted","Data":"79e6502b06500543984acd3c9e9d8dc1194c33c251d7ef42cb2f147a6f63d482"} Feb 19 10:51:11 crc kubenswrapper[4965]: I0219 10:51:11.510591 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzsbp" event={"ID":"3bdd78dd-bb23-4589-9991-ad5296dacb37","Type":"ContainerStarted","Data":"1b63640bafd5b5356c44238390301207a32e58ab5880a0f9138c3900061aed2b"} Feb 19 10:51:11 crc kubenswrapper[4965]: I0219 10:51:11.513591 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tffsw"] Feb 19 10:51:11 crc kubenswrapper[4965]: I0219 10:51:11.513758 4965 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/certified-operators-tffsw" podUID="cf020c44-0110-4d4b-b3d8-9c7101640c06" containerName="registry-server" containerID="cri-o://0c8e90048263604e099a97fc5b918bcb09a14aa65f9ec7d3dad55671813a8eb1" gracePeriod=2 Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.469450 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tffsw" Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.577521 4965 generic.go:334] "Generic (PLEG): container finished" podID="cf020c44-0110-4d4b-b3d8-9c7101640c06" containerID="0c8e90048263604e099a97fc5b918bcb09a14aa65f9ec7d3dad55671813a8eb1" exitCode=0 Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.578242 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tffsw" event={"ID":"cf020c44-0110-4d4b-b3d8-9c7101640c06","Type":"ContainerDied","Data":"0c8e90048263604e099a97fc5b918bcb09a14aa65f9ec7d3dad55671813a8eb1"} Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.578338 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tffsw" event={"ID":"cf020c44-0110-4d4b-b3d8-9c7101640c06","Type":"ContainerDied","Data":"b3f0b6910093f34fd6c7166daa1644900e263e6b1f4c3da77a93b02c353b215d"} Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.578428 4965 scope.go:117] "RemoveContainer" containerID="0c8e90048263604e099a97fc5b918bcb09a14aa65f9ec7d3dad55671813a8eb1" Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.578653 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tffsw" Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.597505 4965 generic.go:334] "Generic (PLEG): container finished" podID="3bdd78dd-bb23-4589-9991-ad5296dacb37" containerID="1b63640bafd5b5356c44238390301207a32e58ab5880a0f9138c3900061aed2b" exitCode=0 Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.597682 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzsbp" event={"ID":"3bdd78dd-bb23-4589-9991-ad5296dacb37","Type":"ContainerDied","Data":"1b63640bafd5b5356c44238390301207a32e58ab5880a0f9138c3900061aed2b"} Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.641463 4965 scope.go:117] "RemoveContainer" containerID="beb6131856ad458a2a3daeaa992d211f874f71d04e8dda5ac573058102ae0d7b" Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.641913 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhwh4\" (UniqueName: \"kubernetes.io/projected/cf020c44-0110-4d4b-b3d8-9c7101640c06-kube-api-access-hhwh4\") pod \"cf020c44-0110-4d4b-b3d8-9c7101640c06\" (UID: \"cf020c44-0110-4d4b-b3d8-9c7101640c06\") " Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.642123 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf020c44-0110-4d4b-b3d8-9c7101640c06-utilities\") pod \"cf020c44-0110-4d4b-b3d8-9c7101640c06\" (UID: \"cf020c44-0110-4d4b-b3d8-9c7101640c06\") " Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.642343 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf020c44-0110-4d4b-b3d8-9c7101640c06-catalog-content\") pod \"cf020c44-0110-4d4b-b3d8-9c7101640c06\" (UID: \"cf020c44-0110-4d4b-b3d8-9c7101640c06\") " Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.644885 4965 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf020c44-0110-4d4b-b3d8-9c7101640c06-utilities" (OuterVolumeSpecName: "utilities") pod "cf020c44-0110-4d4b-b3d8-9c7101640c06" (UID: "cf020c44-0110-4d4b-b3d8-9c7101640c06"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.658432 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf020c44-0110-4d4b-b3d8-9c7101640c06-kube-api-access-hhwh4" (OuterVolumeSpecName: "kube-api-access-hhwh4") pod "cf020c44-0110-4d4b-b3d8-9c7101640c06" (UID: "cf020c44-0110-4d4b-b3d8-9c7101640c06"). InnerVolumeSpecName "kube-api-access-hhwh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.694044 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf020c44-0110-4d4b-b3d8-9c7101640c06-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf020c44-0110-4d4b-b3d8-9c7101640c06" (UID: "cf020c44-0110-4d4b-b3d8-9c7101640c06"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.719049 4965 scope.go:117] "RemoveContainer" containerID="3440c49eb36f1eeee82cf99777cae8943692729be265b0a8b98c7e1cf00a210b" Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.744624 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf020c44-0110-4d4b-b3d8-9c7101640c06-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.744661 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhwh4\" (UniqueName: \"kubernetes.io/projected/cf020c44-0110-4d4b-b3d8-9c7101640c06-kube-api-access-hhwh4\") on node \"crc\" DevicePath \"\"" Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.744674 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf020c44-0110-4d4b-b3d8-9c7101640c06-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.755165 4965 scope.go:117] "RemoveContainer" containerID="0c8e90048263604e099a97fc5b918bcb09a14aa65f9ec7d3dad55671813a8eb1" Feb 19 10:51:12 crc kubenswrapper[4965]: E0219 10:51:12.755678 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c8e90048263604e099a97fc5b918bcb09a14aa65f9ec7d3dad55671813a8eb1\": container with ID starting with 0c8e90048263604e099a97fc5b918bcb09a14aa65f9ec7d3dad55671813a8eb1 not found: ID does not exist" containerID="0c8e90048263604e099a97fc5b918bcb09a14aa65f9ec7d3dad55671813a8eb1" Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.755706 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c8e90048263604e099a97fc5b918bcb09a14aa65f9ec7d3dad55671813a8eb1"} err="failed to get container status 
\"0c8e90048263604e099a97fc5b918bcb09a14aa65f9ec7d3dad55671813a8eb1\": rpc error: code = NotFound desc = could not find container \"0c8e90048263604e099a97fc5b918bcb09a14aa65f9ec7d3dad55671813a8eb1\": container with ID starting with 0c8e90048263604e099a97fc5b918bcb09a14aa65f9ec7d3dad55671813a8eb1 not found: ID does not exist" Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.755726 4965 scope.go:117] "RemoveContainer" containerID="beb6131856ad458a2a3daeaa992d211f874f71d04e8dda5ac573058102ae0d7b" Feb 19 10:51:12 crc kubenswrapper[4965]: E0219 10:51:12.755957 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"beb6131856ad458a2a3daeaa992d211f874f71d04e8dda5ac573058102ae0d7b\": container with ID starting with beb6131856ad458a2a3daeaa992d211f874f71d04e8dda5ac573058102ae0d7b not found: ID does not exist" containerID="beb6131856ad458a2a3daeaa992d211f874f71d04e8dda5ac573058102ae0d7b" Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.755980 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"beb6131856ad458a2a3daeaa992d211f874f71d04e8dda5ac573058102ae0d7b"} err="failed to get container status \"beb6131856ad458a2a3daeaa992d211f874f71d04e8dda5ac573058102ae0d7b\": rpc error: code = NotFound desc = could not find container \"beb6131856ad458a2a3daeaa992d211f874f71d04e8dda5ac573058102ae0d7b\": container with ID starting with beb6131856ad458a2a3daeaa992d211f874f71d04e8dda5ac573058102ae0d7b not found: ID does not exist" Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.755994 4965 scope.go:117] "RemoveContainer" containerID="3440c49eb36f1eeee82cf99777cae8943692729be265b0a8b98c7e1cf00a210b" Feb 19 10:51:12 crc kubenswrapper[4965]: E0219 10:51:12.756500 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3440c49eb36f1eeee82cf99777cae8943692729be265b0a8b98c7e1cf00a210b\": container with ID starting with 3440c49eb36f1eeee82cf99777cae8943692729be265b0a8b98c7e1cf00a210b not found: ID does not exist" containerID="3440c49eb36f1eeee82cf99777cae8943692729be265b0a8b98c7e1cf00a210b" Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.756531 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3440c49eb36f1eeee82cf99777cae8943692729be265b0a8b98c7e1cf00a210b"} err="failed to get container status \"3440c49eb36f1eeee82cf99777cae8943692729be265b0a8b98c7e1cf00a210b\": rpc error: code = NotFound desc = could not find container \"3440c49eb36f1eeee82cf99777cae8943692729be265b0a8b98c7e1cf00a210b\": container with ID starting with 3440c49eb36f1eeee82cf99777cae8943692729be265b0a8b98c7e1cf00a210b not found: ID does not exist" Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.916520 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tffsw"] Feb 19 10:51:12 crc kubenswrapper[4965]: I0219 10:51:12.931032 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tffsw"] Feb 19 10:51:13 crc kubenswrapper[4965]: I0219 10:51:13.207882 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf020c44-0110-4d4b-b3d8-9c7101640c06" path="/var/lib/kubelet/pods/cf020c44-0110-4d4b-b3d8-9c7101640c06/volumes" Feb 19 10:51:13 crc kubenswrapper[4965]: I0219 10:51:13.610304 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzsbp" event={"ID":"3bdd78dd-bb23-4589-9991-ad5296dacb37","Type":"ContainerStarted","Data":"90027a754ff134cc067c4be6de0a7c7f48cca46910cfea93471ae2f2166002c6"} Feb 19 10:51:13 crc kubenswrapper[4965]: I0219 10:51:13.631301 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rzsbp" 
podStartSLOduration=2.8856496160000003 podStartE2EDuration="5.631285761s" podCreationTimestamp="2026-02-19 10:51:08 +0000 UTC" firstStartedPulling="2026-02-19 10:51:10.500805641 +0000 UTC m=+4126.122126951" lastFinishedPulling="2026-02-19 10:51:13.246441746 +0000 UTC m=+4128.867763096" observedRunningTime="2026-02-19 10:51:13.627864968 +0000 UTC m=+4129.249186278" watchObservedRunningTime="2026-02-19 10:51:13.631285761 +0000 UTC m=+4129.252607071" Feb 19 10:51:14 crc kubenswrapper[4965]: I0219 10:51:14.146757 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-62qgz_4fb263d7-f864-47e8-ba07-5a8860db5d11/controller/0.log" Feb 19 10:51:14 crc kubenswrapper[4965]: I0219 10:51:14.207643 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-62qgz_4fb263d7-f864-47e8-ba07-5a8860db5d11/kube-rbac-proxy/0.log" Feb 19 10:51:14 crc kubenswrapper[4965]: I0219 10:51:14.472615 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-frr-files/0.log" Feb 19 10:51:14 crc kubenswrapper[4965]: I0219 10:51:14.704614 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-frr-files/0.log" Feb 19 10:51:14 crc kubenswrapper[4965]: I0219 10:51:14.706299 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-metrics/0.log" Feb 19 10:51:14 crc kubenswrapper[4965]: I0219 10:51:14.738173 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-reloader/0.log" Feb 19 10:51:14 crc kubenswrapper[4965]: I0219 10:51:14.765997 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-reloader/0.log" Feb 19 10:51:14 crc 
kubenswrapper[4965]: I0219 10:51:14.989487 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-frr-files/0.log" Feb 19 10:51:15 crc kubenswrapper[4965]: I0219 10:51:15.040583 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-reloader/0.log" Feb 19 10:51:15 crc kubenswrapper[4965]: I0219 10:51:15.050470 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-metrics/0.log" Feb 19 10:51:15 crc kubenswrapper[4965]: I0219 10:51:15.064894 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-metrics/0.log" Feb 19 10:51:15 crc kubenswrapper[4965]: I0219 10:51:15.308262 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-reloader/0.log" Feb 19 10:51:15 crc kubenswrapper[4965]: I0219 10:51:15.384066 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/controller/0.log" Feb 19 10:51:15 crc kubenswrapper[4965]: I0219 10:51:15.392958 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-frr-files/0.log" Feb 19 10:51:15 crc kubenswrapper[4965]: I0219 10:51:15.423643 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/cp-metrics/0.log" Feb 19 10:51:15 crc kubenswrapper[4965]: I0219 10:51:15.572647 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/frr-metrics/0.log" Feb 19 10:51:15 crc kubenswrapper[4965]: I0219 10:51:15.599304 4965 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/kube-rbac-proxy/0.log" Feb 19 10:51:15 crc kubenswrapper[4965]: I0219 10:51:15.632115 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/kube-rbac-proxy-frr/0.log" Feb 19 10:51:15 crc kubenswrapper[4965]: I0219 10:51:15.814094 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/reloader/0.log" Feb 19 10:51:15 crc kubenswrapper[4965]: I0219 10:51:15.892405 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-l7r8n_de0e351f-d402-4a5b-8942-d22a20ad2fa4/frr-k8s-webhook-server/0.log" Feb 19 10:51:16 crc kubenswrapper[4965]: I0219 10:51:16.039555 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7df4b8cb75-tnc6t_281afb41-32a0-42c3-b25c-e2b5ee969867/manager/0.log" Feb 19 10:51:16 crc kubenswrapper[4965]: I0219 10:51:16.255738 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-fc95c66df-lk6qw_6be4b034-d7e8-410b-bbef-e4989108becd/webhook-server/0.log" Feb 19 10:51:16 crc kubenswrapper[4965]: I0219 10:51:16.496499 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5f69j_2a12c073-8d46-4579-a422-6344a8a4959f/kube-rbac-proxy/0.log" Feb 19 10:51:16 crc kubenswrapper[4965]: I0219 10:51:16.600781 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:51:16 crc kubenswrapper[4965]: I0219 10:51:16.601073 4965 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:51:16 crc kubenswrapper[4965]: I0219 10:51:16.603249 4965 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" Feb 19 10:51:16 crc kubenswrapper[4965]: I0219 10:51:16.604259 4965 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c3e85c95253f7e8eee9d6a40dcd4eec4ece10846ad5e5ac11a1038255d85beca"} pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:51:16 crc kubenswrapper[4965]: I0219 10:51:16.604394 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" containerID="cri-o://c3e85c95253f7e8eee9d6a40dcd4eec4ece10846ad5e5ac11a1038255d85beca" gracePeriod=600 Feb 19 10:51:17 crc kubenswrapper[4965]: I0219 10:51:17.055364 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hcb66_684dceb2-01ab-4856-b857-0d6ade07aadd/frr/0.log" Feb 19 10:51:17 crc kubenswrapper[4965]: I0219 10:51:17.120565 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5f69j_2a12c073-8d46-4579-a422-6344a8a4959f/speaker/0.log" Feb 19 10:51:17 crc kubenswrapper[4965]: I0219 10:51:17.745979 4965 generic.go:334] "Generic (PLEG): container finished" podID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerID="c3e85c95253f7e8eee9d6a40dcd4eec4ece10846ad5e5ac11a1038255d85beca" exitCode=0 Feb 19 10:51:17 crc 
kubenswrapper[4965]: I0219 10:51:17.746023 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerDied","Data":"c3e85c95253f7e8eee9d6a40dcd4eec4ece10846ad5e5ac11a1038255d85beca"} Feb 19 10:51:17 crc kubenswrapper[4965]: I0219 10:51:17.746301 4965 scope.go:117] "RemoveContainer" containerID="dd91099379f13248da41756ec9df975dda5a009207ac101c1b7f089c85137088" Feb 19 10:51:18 crc kubenswrapper[4965]: I0219 10:51:18.515300 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rzsbp" Feb 19 10:51:18 crc kubenswrapper[4965]: I0219 10:51:18.515531 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rzsbp" Feb 19 10:51:18 crc kubenswrapper[4965]: I0219 10:51:18.576591 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rzsbp" Feb 19 10:51:18 crc kubenswrapper[4965]: I0219 10:51:18.756602 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerStarted","Data":"045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17"} Feb 19 10:51:18 crc kubenswrapper[4965]: I0219 10:51:18.808184 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rzsbp" Feb 19 10:51:18 crc kubenswrapper[4965]: I0219 10:51:18.895911 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzsbp"] Feb 19 10:51:20 crc kubenswrapper[4965]: I0219 10:51:20.774501 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rzsbp" podUID="3bdd78dd-bb23-4589-9991-ad5296dacb37" 
containerName="registry-server" containerID="cri-o://90027a754ff134cc067c4be6de0a7c7f48cca46910cfea93471ae2f2166002c6" gracePeriod=2
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.543661 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rzsbp"
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.725702 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bdd78dd-bb23-4589-9991-ad5296dacb37-catalog-content\") pod \"3bdd78dd-bb23-4589-9991-ad5296dacb37\" (UID: \"3bdd78dd-bb23-4589-9991-ad5296dacb37\") "
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.725771 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bdd78dd-bb23-4589-9991-ad5296dacb37-utilities\") pod \"3bdd78dd-bb23-4589-9991-ad5296dacb37\" (UID: \"3bdd78dd-bb23-4589-9991-ad5296dacb37\") "
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.725873 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpmvr\" (UniqueName: \"kubernetes.io/projected/3bdd78dd-bb23-4589-9991-ad5296dacb37-kube-api-access-qpmvr\") pod \"3bdd78dd-bb23-4589-9991-ad5296dacb37\" (UID: \"3bdd78dd-bb23-4589-9991-ad5296dacb37\") "
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.726666 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bdd78dd-bb23-4589-9991-ad5296dacb37-utilities" (OuterVolumeSpecName: "utilities") pod "3bdd78dd-bb23-4589-9991-ad5296dacb37" (UID: "3bdd78dd-bb23-4589-9991-ad5296dacb37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.733595 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bdd78dd-bb23-4589-9991-ad5296dacb37-kube-api-access-qpmvr" (OuterVolumeSpecName: "kube-api-access-qpmvr") pod "3bdd78dd-bb23-4589-9991-ad5296dacb37" (UID: "3bdd78dd-bb23-4589-9991-ad5296dacb37"). InnerVolumeSpecName "kube-api-access-qpmvr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.754177 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bdd78dd-bb23-4589-9991-ad5296dacb37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bdd78dd-bb23-4589-9991-ad5296dacb37" (UID: "3bdd78dd-bb23-4589-9991-ad5296dacb37"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.786470 4965 generic.go:334] "Generic (PLEG): container finished" podID="3bdd78dd-bb23-4589-9991-ad5296dacb37" containerID="90027a754ff134cc067c4be6de0a7c7f48cca46910cfea93471ae2f2166002c6" exitCode=0
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.786521 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzsbp" event={"ID":"3bdd78dd-bb23-4589-9991-ad5296dacb37","Type":"ContainerDied","Data":"90027a754ff134cc067c4be6de0a7c7f48cca46910cfea93471ae2f2166002c6"}
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.786560 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzsbp" event={"ID":"3bdd78dd-bb23-4589-9991-ad5296dacb37","Type":"ContainerDied","Data":"79e6502b06500543984acd3c9e9d8dc1194c33c251d7ef42cb2f147a6f63d482"}
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.786557 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rzsbp"
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.786639 4965 scope.go:117] "RemoveContainer" containerID="90027a754ff134cc067c4be6de0a7c7f48cca46910cfea93471ae2f2166002c6"
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.809301 4965 scope.go:117] "RemoveContainer" containerID="1b63640bafd5b5356c44238390301207a32e58ab5880a0f9138c3900061aed2b"
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.828074 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bdd78dd-bb23-4589-9991-ad5296dacb37-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.828112 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bdd78dd-bb23-4589-9991-ad5296dacb37-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.828124 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpmvr\" (UniqueName: \"kubernetes.io/projected/3bdd78dd-bb23-4589-9991-ad5296dacb37-kube-api-access-qpmvr\") on node \"crc\" DevicePath \"\""
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.846936 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzsbp"]
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.853370 4965 scope.go:117] "RemoveContainer" containerID="0ef69fd5f1a8f6a6eb05a1f7bb97990cda14afa6482271d3de722c0b6b54f2ee"
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.859801 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzsbp"]
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.904395 4965 scope.go:117] "RemoveContainer" containerID="90027a754ff134cc067c4be6de0a7c7f48cca46910cfea93471ae2f2166002c6"
Feb 19 10:51:21 crc kubenswrapper[4965]: E0219 10:51:21.904960 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90027a754ff134cc067c4be6de0a7c7f48cca46910cfea93471ae2f2166002c6\": container with ID starting with 90027a754ff134cc067c4be6de0a7c7f48cca46910cfea93471ae2f2166002c6 not found: ID does not exist" containerID="90027a754ff134cc067c4be6de0a7c7f48cca46910cfea93471ae2f2166002c6"
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.905007 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90027a754ff134cc067c4be6de0a7c7f48cca46910cfea93471ae2f2166002c6"} err="failed to get container status \"90027a754ff134cc067c4be6de0a7c7f48cca46910cfea93471ae2f2166002c6\": rpc error: code = NotFound desc = could not find container \"90027a754ff134cc067c4be6de0a7c7f48cca46910cfea93471ae2f2166002c6\": container with ID starting with 90027a754ff134cc067c4be6de0a7c7f48cca46910cfea93471ae2f2166002c6 not found: ID does not exist"
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.905034 4965 scope.go:117] "RemoveContainer" containerID="1b63640bafd5b5356c44238390301207a32e58ab5880a0f9138c3900061aed2b"
Feb 19 10:51:21 crc kubenswrapper[4965]: E0219 10:51:21.905545 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b63640bafd5b5356c44238390301207a32e58ab5880a0f9138c3900061aed2b\": container with ID starting with 1b63640bafd5b5356c44238390301207a32e58ab5880a0f9138c3900061aed2b not found: ID does not exist" containerID="1b63640bafd5b5356c44238390301207a32e58ab5880a0f9138c3900061aed2b"
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.905598 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b63640bafd5b5356c44238390301207a32e58ab5880a0f9138c3900061aed2b"} err="failed to get container status \"1b63640bafd5b5356c44238390301207a32e58ab5880a0f9138c3900061aed2b\": rpc error: code = NotFound desc = could not find container \"1b63640bafd5b5356c44238390301207a32e58ab5880a0f9138c3900061aed2b\": container with ID starting with 1b63640bafd5b5356c44238390301207a32e58ab5880a0f9138c3900061aed2b not found: ID does not exist"
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.905627 4965 scope.go:117] "RemoveContainer" containerID="0ef69fd5f1a8f6a6eb05a1f7bb97990cda14afa6482271d3de722c0b6b54f2ee"
Feb 19 10:51:21 crc kubenswrapper[4965]: E0219 10:51:21.906040 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ef69fd5f1a8f6a6eb05a1f7bb97990cda14afa6482271d3de722c0b6b54f2ee\": container with ID starting with 0ef69fd5f1a8f6a6eb05a1f7bb97990cda14afa6482271d3de722c0b6b54f2ee not found: ID does not exist" containerID="0ef69fd5f1a8f6a6eb05a1f7bb97990cda14afa6482271d3de722c0b6b54f2ee"
Feb 19 10:51:21 crc kubenswrapper[4965]: I0219 10:51:21.906077 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ef69fd5f1a8f6a6eb05a1f7bb97990cda14afa6482271d3de722c0b6b54f2ee"} err="failed to get container status \"0ef69fd5f1a8f6a6eb05a1f7bb97990cda14afa6482271d3de722c0b6b54f2ee\": rpc error: code = NotFound desc = could not find container \"0ef69fd5f1a8f6a6eb05a1f7bb97990cda14afa6482271d3de722c0b6b54f2ee\": container with ID starting with 0ef69fd5f1a8f6a6eb05a1f7bb97990cda14afa6482271d3de722c0b6b54f2ee not found: ID does not exist"
Feb 19 10:51:23 crc kubenswrapper[4965]: I0219 10:51:23.210860 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bdd78dd-bb23-4589-9991-ad5296dacb37" path="/var/lib/kubelet/pods/3bdd78dd-bb23-4589-9991-ad5296dacb37/volumes"
Feb 19 10:51:33 crc kubenswrapper[4965]: I0219 10:51:33.662086 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v_476478a2-24c2-4386-9876-ab59f36cabbf/util/0.log"
Feb 19 10:51:33 crc kubenswrapper[4965]: I0219 10:51:33.807159 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v_476478a2-24c2-4386-9876-ab59f36cabbf/pull/0.log"
Feb 19 10:51:33 crc kubenswrapper[4965]: I0219 10:51:33.809310 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v_476478a2-24c2-4386-9876-ab59f36cabbf/util/0.log"
Feb 19 10:51:33 crc kubenswrapper[4965]: I0219 10:51:33.854657 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v_476478a2-24c2-4386-9876-ab59f36cabbf/pull/0.log"
Feb 19 10:51:34 crc kubenswrapper[4965]: I0219 10:51:34.141160 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v_476478a2-24c2-4386-9876-ab59f36cabbf/pull/0.log"
Feb 19 10:51:34 crc kubenswrapper[4965]: I0219 10:51:34.178468 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v_476478a2-24c2-4386-9876-ab59f36cabbf/util/0.log"
Feb 19 10:51:34 crc kubenswrapper[4965]: I0219 10:51:34.249020 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651vxf6v_476478a2-24c2-4386-9876-ab59f36cabbf/extract/0.log"
Feb 19 10:51:34 crc kubenswrapper[4965]: I0219 10:51:34.386480 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7_73c31c1a-7233-4c2c-b79b-70abd832d746/util/0.log"
Feb 19 10:51:34 crc kubenswrapper[4965]: I0219 10:51:34.726045 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7_73c31c1a-7233-4c2c-b79b-70abd832d746/pull/0.log"
Feb 19 10:51:34 crc kubenswrapper[4965]: I0219 10:51:34.755314 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7_73c31c1a-7233-4c2c-b79b-70abd832d746/pull/0.log"
Feb 19 10:51:34 crc kubenswrapper[4965]: I0219 10:51:34.763562 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7_73c31c1a-7233-4c2c-b79b-70abd832d746/util/0.log"
Feb 19 10:51:34 crc kubenswrapper[4965]: I0219 10:51:34.936626 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7_73c31c1a-7233-4c2c-b79b-70abd832d746/util/0.log"
Feb 19 10:51:34 crc kubenswrapper[4965]: I0219 10:51:34.963436 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7_73c31c1a-7233-4c2c-b79b-70abd832d746/pull/0.log"
Feb 19 10:51:35 crc kubenswrapper[4965]: I0219 10:51:35.015397 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lsxc7_73c31c1a-7233-4c2c-b79b-70abd832d746/extract/0.log"
Feb 19 10:51:35 crc kubenswrapper[4965]: I0219 10:51:35.127570 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6_38b3ecaf-956c-479a-8c6e-4e0dd083f186/util/0.log"
Feb 19 10:51:35 crc kubenswrapper[4965]: I0219 10:51:35.387589 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6_38b3ecaf-956c-479a-8c6e-4e0dd083f186/pull/0.log"
Feb 19 10:51:35 crc kubenswrapper[4965]: I0219 10:51:35.409158 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6_38b3ecaf-956c-479a-8c6e-4e0dd083f186/util/0.log"
Feb 19 10:51:35 crc kubenswrapper[4965]: I0219 10:51:35.443816 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6_38b3ecaf-956c-479a-8c6e-4e0dd083f186/pull/0.log"
Feb 19 10:51:35 crc kubenswrapper[4965]: I0219 10:51:35.703282 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6_38b3ecaf-956c-479a-8c6e-4e0dd083f186/extract/0.log"
Feb 19 10:51:35 crc kubenswrapper[4965]: I0219 10:51:35.746942 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6_38b3ecaf-956c-479a-8c6e-4e0dd083f186/util/0.log"
Feb 19 10:51:35 crc kubenswrapper[4965]: I0219 10:51:35.777366 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l45v6_38b3ecaf-956c-479a-8c6e-4e0dd083f186/pull/0.log"
Feb 19 10:51:35 crc kubenswrapper[4965]: I0219 10:51:35.903271 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jjz7h_28ed8d8d-3c38-43ae-b93c-d5c88112a04b/extract-utilities/0.log"
Feb 19 10:51:36 crc kubenswrapper[4965]: I0219 10:51:36.120769 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jjz7h_28ed8d8d-3c38-43ae-b93c-d5c88112a04b/extract-content/0.log"
Feb 19 10:51:36 crc kubenswrapper[4965]: I0219 10:51:36.123065 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jjz7h_28ed8d8d-3c38-43ae-b93c-d5c88112a04b/extract-content/0.log"
Feb 19 10:51:36 crc kubenswrapper[4965]: I0219 10:51:36.131218 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jjz7h_28ed8d8d-3c38-43ae-b93c-d5c88112a04b/extract-utilities/0.log"
Feb 19 10:51:36 crc kubenswrapper[4965]: I0219 10:51:36.508313 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jjz7h_28ed8d8d-3c38-43ae-b93c-d5c88112a04b/extract-content/0.log"
Feb 19 10:51:36 crc kubenswrapper[4965]: I0219 10:51:36.533547 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jjz7h_28ed8d8d-3c38-43ae-b93c-d5c88112a04b/extract-utilities/0.log"
Feb 19 10:51:36 crc kubenswrapper[4965]: I0219 10:51:36.760584 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jjz7h_28ed8d8d-3c38-43ae-b93c-d5c88112a04b/registry-server/0.log"
Feb 19 10:51:37 crc kubenswrapper[4965]: I0219 10:51:37.274242 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6ml7g_32baa37b-a196-447f-af2a-0f1cc92785d8/extract-utilities/0.log"
Feb 19 10:51:37 crc kubenswrapper[4965]: I0219 10:51:37.471943 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6ml7g_32baa37b-a196-447f-af2a-0f1cc92785d8/extract-utilities/0.log"
Feb 19 10:51:37 crc kubenswrapper[4965]: I0219 10:51:37.500023 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6ml7g_32baa37b-a196-447f-af2a-0f1cc92785d8/extract-content/0.log"
Feb 19 10:51:37 crc kubenswrapper[4965]: I0219 10:51:37.555650 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6ml7g_32baa37b-a196-447f-af2a-0f1cc92785d8/extract-content/0.log"
Feb 19 10:51:37 crc kubenswrapper[4965]: I0219 10:51:37.724697 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6ml7g_32baa37b-a196-447f-af2a-0f1cc92785d8/extract-utilities/0.log"
Feb 19 10:51:37 crc kubenswrapper[4965]: I0219 10:51:37.848477 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6ml7g_32baa37b-a196-447f-af2a-0f1cc92785d8/extract-content/0.log"
Feb 19 10:51:37 crc kubenswrapper[4965]: I0219 10:51:37.932048 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t_a18e3883-75f3-47f3-a6a6-31358dbc980a/util/0.log"
Feb 19 10:51:38 crc kubenswrapper[4965]: I0219 10:51:38.171405 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t_a18e3883-75f3-47f3-a6a6-31358dbc980a/pull/0.log"
Feb 19 10:51:38 crc kubenswrapper[4965]: I0219 10:51:38.179111 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t_a18e3883-75f3-47f3-a6a6-31358dbc980a/util/0.log"
Feb 19 10:51:38 crc kubenswrapper[4965]: I0219 10:51:38.232098 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t_a18e3883-75f3-47f3-a6a6-31358dbc980a/pull/0.log"
Feb 19 10:51:38 crc kubenswrapper[4965]: I0219 10:51:38.305865 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t_a18e3883-75f3-47f3-a6a6-31358dbc980a/util/0.log"
Feb 19 10:51:38 crc kubenswrapper[4965]: I0219 10:51:38.358682 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t_a18e3883-75f3-47f3-a6a6-31358dbc980a/pull/0.log"
Feb 19 10:51:38 crc kubenswrapper[4965]: I0219 10:51:38.558966 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6ml7g_32baa37b-a196-447f-af2a-0f1cc92785d8/registry-server/0.log"
Feb 19 10:51:39 crc kubenswrapper[4965]: I0219 10:51:39.078611 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p94wp_f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d/extract-utilities/0.log"
Feb 19 10:51:39 crc kubenswrapper[4965]: I0219 10:51:39.101147 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf999t_a18e3883-75f3-47f3-a6a6-31358dbc980a/extract/0.log"
Feb 19 10:51:39 crc kubenswrapper[4965]: I0219 10:51:39.109642 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-pbfkw_16a589f2-57f9-460f-9802-1c63bd877a05/marketplace-operator/0.log"
Feb 19 10:51:39 crc kubenswrapper[4965]: I0219 10:51:39.384122 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p94wp_f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d/extract-content/0.log"
Feb 19 10:51:39 crc kubenswrapper[4965]: I0219 10:51:39.415136 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p94wp_f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d/extract-utilities/0.log"
Feb 19 10:51:39 crc kubenswrapper[4965]: I0219 10:51:39.420387 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p94wp_f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d/extract-content/0.log"
Feb 19 10:51:39 crc kubenswrapper[4965]: I0219 10:51:39.637593 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p94wp_f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d/extract-content/0.log"
Feb 19 10:51:39 crc kubenswrapper[4965]: I0219 10:51:39.655361 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p94wp_f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d/extract-utilities/0.log"
Feb 19 10:51:39 crc kubenswrapper[4965]: I0219 10:51:39.681230 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w6fn7_59eb38d1-a115-462c-b054-4660ec8e6ac1/extract-utilities/0.log"
Feb 19 10:51:39 crc kubenswrapper[4965]: I0219 10:51:39.798425 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p94wp_f8b960f6-0e57-4ebd-83e9-b245cbcd3b9d/registry-server/0.log"
Feb 19 10:51:39 crc kubenswrapper[4965]: I0219 10:51:39.953807 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w6fn7_59eb38d1-a115-462c-b054-4660ec8e6ac1/extract-content/0.log"
Feb 19 10:51:40 crc kubenswrapper[4965]: I0219 10:51:40.003859 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w6fn7_59eb38d1-a115-462c-b054-4660ec8e6ac1/extract-utilities/0.log"
Feb 19 10:51:40 crc kubenswrapper[4965]: I0219 10:51:40.011863 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w6fn7_59eb38d1-a115-462c-b054-4660ec8e6ac1/extract-content/0.log"
Feb 19 10:51:40 crc kubenswrapper[4965]: I0219 10:51:40.249950 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w6fn7_59eb38d1-a115-462c-b054-4660ec8e6ac1/extract-content/0.log"
Feb 19 10:51:40 crc kubenswrapper[4965]: I0219 10:51:40.311525 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w6fn7_59eb38d1-a115-462c-b054-4660ec8e6ac1/extract-utilities/0.log"
Feb 19 10:51:40 crc kubenswrapper[4965]: I0219 10:51:40.809224 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w6fn7_59eb38d1-a115-462c-b054-4660ec8e6ac1/registry-server/0.log"
Feb 19 10:51:54 crc kubenswrapper[4965]: I0219 10:51:54.819116 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5675bf8465-d42dv_0e50e1bd-3144-4362-9c46-355cfb2ba24f/prometheus-operator-admission-webhook/0.log"
Feb 19 10:51:54 crc kubenswrapper[4965]: I0219 10:51:54.898058 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5675bf8465-h45db_0d85e95a-22ec-4364-a43c-04e60d68be0d/prometheus-operator-admission-webhook/0.log"
Feb 19 10:51:54 crc kubenswrapper[4965]: I0219 10:51:54.914633 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-qfjz7_97e4a3bf-25d9-4a7b-ab73-7be5267dcfb1/prometheus-operator/0.log"
Feb 19 10:51:55 crc kubenswrapper[4965]: I0219 10:51:55.045024 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-h4689_b7e1070f-f099-4a4f-a107-c1b8589af7c7/operator/0.log"
Feb 19 10:51:55 crc kubenswrapper[4965]: I0219 10:51:55.118314 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-x7xjb_d55c4261-3d41-49fd-97dd-098bb8747449/perses-operator/0.log"
Feb 19 10:52:10 crc kubenswrapper[4965]: I0219 10:52:10.479127 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-564bb987d4-6pxn4_d8ed232a-7084-4f69-afdf-6d674b5864de/kube-rbac-proxy/0.log"
Feb 19 10:52:10 crc kubenswrapper[4965]: I0219 10:52:10.584743 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-564bb987d4-6pxn4_d8ed232a-7084-4f69-afdf-6d674b5864de/manager/0.log"
Feb 19 10:52:43 crc kubenswrapper[4965]: I0219 10:52:43.996626 4965 scope.go:117] "RemoveContainer" containerID="f2e99faab2a6ce1f49874fad86075f83f1c6c110f11f1b6c253b421612fa30fd"
Feb 19 10:52:48 crc kubenswrapper[4965]: I0219 10:52:48.807162 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="2a094c76-7174-4b58-8b32-12020982c63b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Feb 19 10:52:49 crc kubenswrapper[4965]: I0219 10:52:49.968486 4965 patch_prober.go:28] interesting pod/oauth-openshift-77df6bdc9c-lfqjn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.63:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 10:52:49 crc kubenswrapper[4965]: I0219 10:52:49.968836 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-77df6bdc9c-lfqjn" podUID="bcd62ed6-4ef3-48f2-9e7a-08b37f79f69a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.63:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 19 10:53:03 crc kubenswrapper[4965]: I0219 10:53:03.133561 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6tplv"]
Feb 19 10:53:03 crc kubenswrapper[4965]: E0219 10:53:03.134739 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf020c44-0110-4d4b-b3d8-9c7101640c06" containerName="registry-server"
Feb 19 10:53:03 crc kubenswrapper[4965]: I0219 10:53:03.134754 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf020c44-0110-4d4b-b3d8-9c7101640c06" containerName="registry-server"
Feb 19 10:53:03 crc kubenswrapper[4965]: E0219 10:53:03.134765 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bdd78dd-bb23-4589-9991-ad5296dacb37" containerName="extract-utilities"
Feb 19 10:53:03 crc kubenswrapper[4965]: I0219 10:53:03.134771 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bdd78dd-bb23-4589-9991-ad5296dacb37" containerName="extract-utilities"
Feb 19 10:53:03 crc kubenswrapper[4965]: E0219 10:53:03.134790 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf020c44-0110-4d4b-b3d8-9c7101640c06" containerName="extract-utilities"
Feb 19 10:53:03 crc kubenswrapper[4965]: I0219 10:53:03.134798 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf020c44-0110-4d4b-b3d8-9c7101640c06" containerName="extract-utilities"
Feb 19 10:53:03 crc kubenswrapper[4965]: E0219 10:53:03.134808 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bdd78dd-bb23-4589-9991-ad5296dacb37" containerName="registry-server"
Feb 19 10:53:03 crc kubenswrapper[4965]: I0219 10:53:03.134813 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bdd78dd-bb23-4589-9991-ad5296dacb37" containerName="registry-server"
Feb 19 10:53:03 crc kubenswrapper[4965]: E0219 10:53:03.134830 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bdd78dd-bb23-4589-9991-ad5296dacb37" containerName="extract-content"
Feb 19 10:53:03 crc kubenswrapper[4965]: I0219 10:53:03.134836 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bdd78dd-bb23-4589-9991-ad5296dacb37" containerName="extract-content"
Feb 19 10:53:03 crc kubenswrapper[4965]: E0219 10:53:03.134850 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf020c44-0110-4d4b-b3d8-9c7101640c06" containerName="extract-content"
Feb 19 10:53:03 crc kubenswrapper[4965]: I0219 10:53:03.134856 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf020c44-0110-4d4b-b3d8-9c7101640c06" containerName="extract-content"
Feb 19 10:53:03 crc kubenswrapper[4965]: I0219 10:53:03.135055 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bdd78dd-bb23-4589-9991-ad5296dacb37" containerName="registry-server"
Feb 19 10:53:03 crc kubenswrapper[4965]: I0219 10:53:03.135083 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf020c44-0110-4d4b-b3d8-9c7101640c06" containerName="registry-server"
Feb 19 10:53:03 crc kubenswrapper[4965]: I0219 10:53:03.138236 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6tplv"
Feb 19 10:53:03 crc kubenswrapper[4965]: I0219 10:53:03.172127 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6tplv"]
Feb 19 10:53:03 crc kubenswrapper[4965]: I0219 10:53:03.200795 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ac58895-c2f0-4231-b9af-69b821f10453-catalog-content\") pod \"community-operators-6tplv\" (UID: \"2ac58895-c2f0-4231-b9af-69b821f10453\") " pod="openshift-marketplace/community-operators-6tplv"
Feb 19 10:53:03 crc kubenswrapper[4965]: I0219 10:53:03.200864 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ac58895-c2f0-4231-b9af-69b821f10453-utilities\") pod \"community-operators-6tplv\" (UID: \"2ac58895-c2f0-4231-b9af-69b821f10453\") " pod="openshift-marketplace/community-operators-6tplv"
Feb 19 10:53:03 crc kubenswrapper[4965]: I0219 10:53:03.200903 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxdhq\" (UniqueName: \"kubernetes.io/projected/2ac58895-c2f0-4231-b9af-69b821f10453-kube-api-access-pxdhq\") pod \"community-operators-6tplv\" (UID: \"2ac58895-c2f0-4231-b9af-69b821f10453\") " pod="openshift-marketplace/community-operators-6tplv"
Feb 19 10:53:03 crc kubenswrapper[4965]: I0219 10:53:03.302889 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ac58895-c2f0-4231-b9af-69b821f10453-catalog-content\") pod \"community-operators-6tplv\" (UID: \"2ac58895-c2f0-4231-b9af-69b821f10453\") " pod="openshift-marketplace/community-operators-6tplv"
Feb 19 10:53:03 crc kubenswrapper[4965]: I0219 10:53:03.302951 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ac58895-c2f0-4231-b9af-69b821f10453-utilities\") pod \"community-operators-6tplv\" (UID: \"2ac58895-c2f0-4231-b9af-69b821f10453\") " pod="openshift-marketplace/community-operators-6tplv"
Feb 19 10:53:03 crc kubenswrapper[4965]: I0219 10:53:03.302987 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxdhq\" (UniqueName: \"kubernetes.io/projected/2ac58895-c2f0-4231-b9af-69b821f10453-kube-api-access-pxdhq\") pod \"community-operators-6tplv\" (UID: \"2ac58895-c2f0-4231-b9af-69b821f10453\") " pod="openshift-marketplace/community-operators-6tplv"
Feb 19 10:53:03 crc kubenswrapper[4965]: I0219 10:53:03.303379 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ac58895-c2f0-4231-b9af-69b821f10453-catalog-content\") pod \"community-operators-6tplv\" (UID: \"2ac58895-c2f0-4231-b9af-69b821f10453\") " pod="openshift-marketplace/community-operators-6tplv"
Feb 19 10:53:03 crc kubenswrapper[4965]: I0219 10:53:03.303705 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ac58895-c2f0-4231-b9af-69b821f10453-utilities\") pod \"community-operators-6tplv\" (UID: \"2ac58895-c2f0-4231-b9af-69b821f10453\") " pod="openshift-marketplace/community-operators-6tplv"
Feb 19 10:53:03 crc kubenswrapper[4965]: I0219 10:53:03.320954 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxdhq\" (UniqueName: \"kubernetes.io/projected/2ac58895-c2f0-4231-b9af-69b821f10453-kube-api-access-pxdhq\") pod \"community-operators-6tplv\" (UID: \"2ac58895-c2f0-4231-b9af-69b821f10453\") " pod="openshift-marketplace/community-operators-6tplv"
Feb 19 10:53:03 crc kubenswrapper[4965]: I0219 10:53:03.471519 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6tplv"
Feb 19 10:53:04 crc kubenswrapper[4965]: I0219 10:53:04.099032 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6tplv"]
Feb 19 10:53:04 crc kubenswrapper[4965]: I0219 10:53:04.804043 4965 generic.go:334] "Generic (PLEG): container finished" podID="2ac58895-c2f0-4231-b9af-69b821f10453" containerID="a0aa9e9a332e4ca38a4b4b7e10ae00a5304b64455731439f6a062affa9269242" exitCode=0
Feb 19 10:53:04 crc kubenswrapper[4965]: I0219 10:53:04.804154 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tplv" event={"ID":"2ac58895-c2f0-4231-b9af-69b821f10453","Type":"ContainerDied","Data":"a0aa9e9a332e4ca38a4b4b7e10ae00a5304b64455731439f6a062affa9269242"}
Feb 19 10:53:04 crc kubenswrapper[4965]: I0219 10:53:04.804498 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tplv" event={"ID":"2ac58895-c2f0-4231-b9af-69b821f10453","Type":"ContainerStarted","Data":"cba6145001289a89fa6dc8c5664702e8dc068443746f678231af41399b8bb0b6"}
Feb 19 10:53:04 crc kubenswrapper[4965]: I0219 10:53:04.805635 4965 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 10:53:07 crc kubenswrapper[4965]: I0219 10:53:07.854720 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tplv" event={"ID":"2ac58895-c2f0-4231-b9af-69b821f10453","Type":"ContainerStarted","Data":"3f2d7b82a141cd0814a9c26144709239473f3f97e902b9cb8206aba1a230e95d"}
Feb 19 10:53:13 crc kubenswrapper[4965]: I0219 10:53:13.808776 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="2a094c76-7174-4b58-8b32-12020982c63b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Feb 19 10:53:15 crc kubenswrapper[4965]: I0219 10:53:15.685456 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-5f69j" podUID="2a12c073-8d46-4579-a422-6344a8a4959f" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 10:53:18 crc kubenswrapper[4965]: I0219 10:53:18.808140 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="2a094c76-7174-4b58-8b32-12020982c63b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Feb 19 10:53:23 crc kubenswrapper[4965]: I0219 10:53:23.806441 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="2a094c76-7174-4b58-8b32-12020982c63b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Feb 19 10:53:23 crc kubenswrapper[4965]: I0219 10:53:23.807102 4965 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0"
Feb 19 10:53:23 crc kubenswrapper[4965]: I0219 10:53:23.808631 4965 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"a3d70d4da191ace2768d5dff43bbf9ad163f40c602833bd37ac54e23b3c98589"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted"
Feb 19 10:53:23 crc kubenswrapper[4965]: I0219 10:53:23.808767 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2a094c76-7174-4b58-8b32-12020982c63b" containerName="ceilometer-central-agent" containerID="cri-o://a3d70d4da191ace2768d5dff43bbf9ad163f40c602833bd37ac54e23b3c98589" gracePeriod=30
Feb 19 10:53:25 crc kubenswrapper[4965]: I0219 10:53:25.725380 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-5f69j" podUID="2a12c073-8d46-4579-a422-6344a8a4959f" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 10:53:25 crc kubenswrapper[4965]: I0219 10:53:25.725491 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-5f69j" podUID="2a12c073-8d46-4579-a422-6344a8a4959f" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 10:53:28 crc kubenswrapper[4965]: I0219 10:53:28.808284 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="2a094c76-7174-4b58-8b32-12020982c63b" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out"
Feb 19 10:53:34 crc kubenswrapper[4965]: I0219 10:53:34.869481 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" podUID="58e82cd5-3bd0-4f99-b958-29e5541fa49a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 10:53:35 crc kubenswrapper[4965]: I0219 10:53:35.726479 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-5f69j" podUID="2a12c073-8d46-4579-a422-6344a8a4959f" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 10:53:35 crc kubenswrapper[4965]: I0219 10:53:35.726538 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-5f69j" podUID="2a12c073-8d46-4579-a422-6344a8a4959f" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 10:53:35 crc kubenswrapper[4965]: I0219 10:53:35.726624 4965 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-5f69j"
Feb 19 10:53:35 crc kubenswrapper[4965]: I0219 10:53:35.727994 4965 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"0cea63cd4dd845e86558c23c8683d047a1be8161b2084ada1ae757021ddcadc0"} pod="metallb-system/speaker-5f69j" containerMessage="Container speaker failed liveness probe, will be restarted"
Feb 19 10:53:35 crc kubenswrapper[4965]: I0219 10:53:35.728140 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-5f69j" podUID="2a12c073-8d46-4579-a422-6344a8a4959f" containerName="speaker" containerID="cri-o://0cea63cd4dd845e86558c23c8683d047a1be8161b2084ada1ae757021ddcadc0" gracePeriod=2
Feb 19 10:53:37 crc kubenswrapper[4965]: I0219 10:53:37.529586 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-d86db9fbc-vplp8" podUID="5c35ac3d-3c0a-48b2-a17d-ce896fa8a00e" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 10:53:37 crc kubenswrapper[4965]: I0219 10:53:37.529569 4965 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/openstack-operator-controller-init-d86db9fbc-vplp8" podUID="5c35ac3d-3c0a-48b2-a17d-ce896fa8a00e" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.77:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:53:44 crc kubenswrapper[4965]: I0219 10:53:44.646135 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-5f69j" podUID="2a12c073-8d46-4579-a422-6344a8a4959f" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Feb 19 10:53:44 crc kubenswrapper[4965]: I0219 10:53:44.647168 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-5f69j" Feb 19 10:53:44 crc kubenswrapper[4965]: I0219 10:53:44.912422 4965 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" podUID="58e82cd5-3bd0-4f99-b958-29e5541fa49a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:53:44 crc kubenswrapper[4965]: I0219 10:53:44.912448 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpknzq" podUID="58e82cd5-3bd0-4f99-b958-29e5541fa49a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:53:46 crc kubenswrapper[4965]: I0219 10:53:46.601519 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 19 10:53:46 crc kubenswrapper[4965]: I0219 10:53:46.601888 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:53:51 crc kubenswrapper[4965]: I0219 10:53:51.308647 4965 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 10:53:51 crc kubenswrapper[4965]: I0219 10:53:51.309073 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 10:53:51 crc kubenswrapper[4965]: I0219 10:53:51.359594 4965 generic.go:334] "Generic (PLEG): container finished" podID="2a094c76-7174-4b58-8b32-12020982c63b" containerID="a3d70d4da191ace2768d5dff43bbf9ad163f40c602833bd37ac54e23b3c98589" exitCode=0 Feb 19 10:53:51 crc kubenswrapper[4965]: I0219 10:53:51.359667 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a094c76-7174-4b58-8b32-12020982c63b","Type":"ContainerDied","Data":"a3d70d4da191ace2768d5dff43bbf9ad163f40c602833bd37ac54e23b3c98589"} Feb 19 10:53:51 crc kubenswrapper[4965]: I0219 10:53:51.361565 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 19 
10:53:51 crc kubenswrapper[4965]: I0219 10:53:51.554935 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 10:53:51 crc kubenswrapper[4965]: I0219 10:53:51.555319 4965 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="fde05b39565efd0c16535f3967be499f5ab546b0c21a3aea53e43a4537646db3" exitCode=1 Feb 19 10:53:51 crc kubenswrapper[4965]: I0219 10:53:51.555437 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"fde05b39565efd0c16535f3967be499f5ab546b0c21a3aea53e43a4537646db3"} Feb 19 10:53:51 crc kubenswrapper[4965]: I0219 10:53:51.555516 4965 scope.go:117] "RemoveContainer" containerID="b5d4ac252f5069500eef4e1579559c883095bf1c21a29cb96a36a4aab507a4de" Feb 19 10:53:51 crc kubenswrapper[4965]: I0219 10:53:51.556391 4965 scope.go:117] "RemoveContainer" containerID="fde05b39565efd0c16535f3967be499f5ab546b0c21a3aea53e43a4537646db3" Feb 19 10:53:51 crc kubenswrapper[4965]: I0219 10:53:51.558410 4965 generic.go:334] "Generic (PLEG): container finished" podID="2a12c073-8d46-4579-a422-6344a8a4959f" containerID="0cea63cd4dd845e86558c23c8683d047a1be8161b2084ada1ae757021ddcadc0" exitCode=137 Feb 19 10:53:51 crc kubenswrapper[4965]: I0219 10:53:51.558451 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5f69j" event={"ID":"2a12c073-8d46-4579-a422-6344a8a4959f","Type":"ContainerDied","Data":"0cea63cd4dd845e86558c23c8683d047a1be8161b2084ada1ae757021ddcadc0"} Feb 19 10:53:52 crc kubenswrapper[4965]: I0219 10:53:52.577945 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2a094c76-7174-4b58-8b32-12020982c63b","Type":"ContainerStarted","Data":"13f956fad2a494beaba068db128235ac923004622e8da844091bb0c500dbad60"} Feb 19 10:53:52 crc kubenswrapper[4965]: I0219 10:53:52.581920 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 19 10:53:52 crc kubenswrapper[4965]: I0219 10:53:52.584939 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3c36e017cb0beb7abd72cd0e0533b7e3ed84e6ebc0bb1c90bf5be450e06d87d8"} Feb 19 10:53:52 crc kubenswrapper[4965]: I0219 10:53:52.587770 4965 generic.go:334] "Generic (PLEG): container finished" podID="2ac58895-c2f0-4231-b9af-69b821f10453" containerID="3f2d7b82a141cd0814a9c26144709239473f3f97e902b9cb8206aba1a230e95d" exitCode=0 Feb 19 10:53:52 crc kubenswrapper[4965]: I0219 10:53:52.587810 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tplv" event={"ID":"2ac58895-c2f0-4231-b9af-69b821f10453","Type":"ContainerDied","Data":"3f2d7b82a141cd0814a9c26144709239473f3f97e902b9cb8206aba1a230e95d"} Feb 19 10:53:52 crc kubenswrapper[4965]: I0219 10:53:52.591019 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5f69j" event={"ID":"2a12c073-8d46-4579-a422-6344a8a4959f","Type":"ContainerStarted","Data":"073154f990171ea5456184d9292238e47195682771c64e28c4a0d8a38a389954"} Feb 19 10:53:52 crc kubenswrapper[4965]: I0219 10:53:52.591211 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-5f69j" Feb 19 10:53:53 crc kubenswrapper[4965]: I0219 10:53:53.603491 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tplv" 
event={"ID":"2ac58895-c2f0-4231-b9af-69b821f10453","Type":"ContainerStarted","Data":"d65a95728e948303a9f68cd2141e611357c689151064379339f13854ede010aa"} Feb 19 10:53:53 crc kubenswrapper[4965]: I0219 10:53:53.626831 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6tplv" podStartSLOduration=2.142643341 podStartE2EDuration="50.626811187s" podCreationTimestamp="2026-02-19 10:53:03 +0000 UTC" firstStartedPulling="2026-02-19 10:53:04.805372447 +0000 UTC m=+4240.426693767" lastFinishedPulling="2026-02-19 10:53:53.289540303 +0000 UTC m=+4288.910861613" observedRunningTime="2026-02-19 10:53:53.623321543 +0000 UTC m=+4289.244642863" watchObservedRunningTime="2026-02-19 10:53:53.626811187 +0000 UTC m=+4289.248132497" Feb 19 10:53:54 crc kubenswrapper[4965]: I0219 10:53:54.448099 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 10:54:01 crc kubenswrapper[4965]: I0219 10:54:01.229015 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 10:54:01 crc kubenswrapper[4965]: I0219 10:54:01.232371 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 10:54:03 crc kubenswrapper[4965]: I0219 10:54:03.471811 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6tplv" Feb 19 10:54:03 crc kubenswrapper[4965]: I0219 10:54:03.472091 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6tplv" Feb 19 10:54:03 crc kubenswrapper[4965]: I0219 10:54:03.517378 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6tplv" Feb 19 10:54:04 crc 
kubenswrapper[4965]: I0219 10:54:04.360290 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6tplv" Feb 19 10:54:04 crc kubenswrapper[4965]: I0219 10:54:04.422852 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6tplv"] Feb 19 10:54:04 crc kubenswrapper[4965]: I0219 10:54:04.452955 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 10:54:04 crc kubenswrapper[4965]: I0219 10:54:04.646917 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-5f69j" Feb 19 10:54:06 crc kubenswrapper[4965]: I0219 10:54:06.315087 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6tplv" podUID="2ac58895-c2f0-4231-b9af-69b821f10453" containerName="registry-server" containerID="cri-o://d65a95728e948303a9f68cd2141e611357c689151064379339f13854ede010aa" gracePeriod=2 Feb 19 10:54:07 crc kubenswrapper[4965]: I0219 10:54:07.343409 4965 generic.go:334] "Generic (PLEG): container finished" podID="2ac58895-c2f0-4231-b9af-69b821f10453" containerID="d65a95728e948303a9f68cd2141e611357c689151064379339f13854ede010aa" exitCode=0 Feb 19 10:54:07 crc kubenswrapper[4965]: I0219 10:54:07.343902 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tplv" event={"ID":"2ac58895-c2f0-4231-b9af-69b821f10453","Type":"ContainerDied","Data":"d65a95728e948303a9f68cd2141e611357c689151064379339f13854ede010aa"} Feb 19 10:54:07 crc kubenswrapper[4965]: I0219 10:54:07.690464 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6tplv" Feb 19 10:54:07 crc kubenswrapper[4965]: I0219 10:54:07.803872 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ac58895-c2f0-4231-b9af-69b821f10453-utilities\") pod \"2ac58895-c2f0-4231-b9af-69b821f10453\" (UID: \"2ac58895-c2f0-4231-b9af-69b821f10453\") " Feb 19 10:54:07 crc kubenswrapper[4965]: I0219 10:54:07.804023 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxdhq\" (UniqueName: \"kubernetes.io/projected/2ac58895-c2f0-4231-b9af-69b821f10453-kube-api-access-pxdhq\") pod \"2ac58895-c2f0-4231-b9af-69b821f10453\" (UID: \"2ac58895-c2f0-4231-b9af-69b821f10453\") " Feb 19 10:54:07 crc kubenswrapper[4965]: I0219 10:54:07.804094 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ac58895-c2f0-4231-b9af-69b821f10453-catalog-content\") pod \"2ac58895-c2f0-4231-b9af-69b821f10453\" (UID: \"2ac58895-c2f0-4231-b9af-69b821f10453\") " Feb 19 10:54:07 crc kubenswrapper[4965]: I0219 10:54:07.805308 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ac58895-c2f0-4231-b9af-69b821f10453-utilities" (OuterVolumeSpecName: "utilities") pod "2ac58895-c2f0-4231-b9af-69b821f10453" (UID: "2ac58895-c2f0-4231-b9af-69b821f10453"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:54:07 crc kubenswrapper[4965]: I0219 10:54:07.810933 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ac58895-c2f0-4231-b9af-69b821f10453-kube-api-access-pxdhq" (OuterVolumeSpecName: "kube-api-access-pxdhq") pod "2ac58895-c2f0-4231-b9af-69b821f10453" (UID: "2ac58895-c2f0-4231-b9af-69b821f10453"). InnerVolumeSpecName "kube-api-access-pxdhq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:54:07 crc kubenswrapper[4965]: I0219 10:54:07.887176 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ac58895-c2f0-4231-b9af-69b821f10453-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ac58895-c2f0-4231-b9af-69b821f10453" (UID: "2ac58895-c2f0-4231-b9af-69b821f10453"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:54:07 crc kubenswrapper[4965]: I0219 10:54:07.907926 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ac58895-c2f0-4231-b9af-69b821f10453-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:54:07 crc kubenswrapper[4965]: I0219 10:54:07.907982 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxdhq\" (UniqueName: \"kubernetes.io/projected/2ac58895-c2f0-4231-b9af-69b821f10453-kube-api-access-pxdhq\") on node \"crc\" DevicePath \"\"" Feb 19 10:54:07 crc kubenswrapper[4965]: I0219 10:54:07.908002 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ac58895-c2f0-4231-b9af-69b821f10453-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:54:08 crc kubenswrapper[4965]: I0219 10:54:08.356853 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tplv" event={"ID":"2ac58895-c2f0-4231-b9af-69b821f10453","Type":"ContainerDied","Data":"cba6145001289a89fa6dc8c5664702e8dc068443746f678231af41399b8bb0b6"} Feb 19 10:54:08 crc kubenswrapper[4965]: I0219 10:54:08.356912 4965 scope.go:117] "RemoveContainer" containerID="d65a95728e948303a9f68cd2141e611357c689151064379339f13854ede010aa" Feb 19 10:54:08 crc kubenswrapper[4965]: I0219 10:54:08.357056 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6tplv" Feb 19 10:54:08 crc kubenswrapper[4965]: I0219 10:54:08.386779 4965 scope.go:117] "RemoveContainer" containerID="3f2d7b82a141cd0814a9c26144709239473f3f97e902b9cb8206aba1a230e95d" Feb 19 10:54:08 crc kubenswrapper[4965]: I0219 10:54:08.397430 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6tplv"] Feb 19 10:54:08 crc kubenswrapper[4965]: I0219 10:54:08.418999 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6tplv"] Feb 19 10:54:08 crc kubenswrapper[4965]: I0219 10:54:08.623484 4965 scope.go:117] "RemoveContainer" containerID="a0aa9e9a332e4ca38a4b4b7e10ae00a5304b64455731439f6a062affa9269242" Feb 19 10:54:09 crc kubenswrapper[4965]: I0219 10:54:09.210677 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ac58895-c2f0-4231-b9af-69b821f10453" path="/var/lib/kubelet/pods/2ac58895-c2f0-4231-b9af-69b821f10453/volumes" Feb 19 10:54:16 crc kubenswrapper[4965]: I0219 10:54:16.601133 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:54:16 crc kubenswrapper[4965]: I0219 10:54:16.601682 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:54:46 crc kubenswrapper[4965]: I0219 10:54:46.601689 4965 patch_prober.go:28] interesting pod/machine-config-daemon-7mhh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:54:46 crc kubenswrapper[4965]: I0219 10:54:46.602236 4965 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:54:46 crc kubenswrapper[4965]: I0219 10:54:46.602279 4965 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" Feb 19 10:54:46 crc kubenswrapper[4965]: I0219 10:54:46.602959 4965 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17"} pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:54:46 crc kubenswrapper[4965]: I0219 10:54:46.602999 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerName="machine-config-daemon" containerID="cri-o://045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17" gracePeriod=600 Feb 19 10:54:46 crc kubenswrapper[4965]: E0219 10:54:46.728915 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:54:46 crc kubenswrapper[4965]: I0219 10:54:46.781312 4965 generic.go:334] "Generic (PLEG): container finished" podID="63ef3eb8-6103-492d-b6ef-f16081d15e83" containerID="045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17" exitCode=0 Feb 19 10:54:46 crc kubenswrapper[4965]: I0219 10:54:46.781370 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" event={"ID":"63ef3eb8-6103-492d-b6ef-f16081d15e83","Type":"ContainerDied","Data":"045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17"} Feb 19 10:54:46 crc kubenswrapper[4965]: I0219 10:54:46.781410 4965 scope.go:117] "RemoveContainer" containerID="c3e85c95253f7e8eee9d6a40dcd4eec4ece10846ad5e5ac11a1038255d85beca" Feb 19 10:54:46 crc kubenswrapper[4965]: I0219 10:54:46.782070 4965 scope.go:117] "RemoveContainer" containerID="045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17" Feb 19 10:54:46 crc kubenswrapper[4965]: E0219 10:54:46.782384 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:55:00 crc kubenswrapper[4965]: I0219 10:55:00.198455 4965 scope.go:117] "RemoveContainer" containerID="045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17" Feb 19 10:55:00 crc kubenswrapper[4965]: E0219 10:55:00.199365 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:55:10 crc kubenswrapper[4965]: I0219 10:55:10.032771 4965 generic.go:334] "Generic (PLEG): container finished" podID="f111f77a-70e9-4d6f-93cf-672cf0d7bd7f" containerID="32067aef8af394a29248c5c4c4d72c68fb56494c5f38be04157a99f572407114" exitCode=0 Feb 19 10:55:10 crc kubenswrapper[4965]: I0219 10:55:10.032856 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6r9fx/must-gather-6x62j" event={"ID":"f111f77a-70e9-4d6f-93cf-672cf0d7bd7f","Type":"ContainerDied","Data":"32067aef8af394a29248c5c4c4d72c68fb56494c5f38be04157a99f572407114"} Feb 19 10:55:10 crc kubenswrapper[4965]: I0219 10:55:10.033719 4965 scope.go:117] "RemoveContainer" containerID="32067aef8af394a29248c5c4c4d72c68fb56494c5f38be04157a99f572407114" Feb 19 10:55:10 crc kubenswrapper[4965]: I0219 10:55:10.500330 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6r9fx_must-gather-6x62j_f111f77a-70e9-4d6f-93cf-672cf0d7bd7f/gather/0.log" Feb 19 10:55:13 crc kubenswrapper[4965]: I0219 10:55:13.199008 4965 scope.go:117] "RemoveContainer" containerID="045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17" Feb 19 10:55:13 crc kubenswrapper[4965]: E0219 10:55:13.200920 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:55:22 crc kubenswrapper[4965]: I0219 10:55:22.932510 4965 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-must-gather-6r9fx/must-gather-6x62j"] Feb 19 10:55:22 crc kubenswrapper[4965]: I0219 10:55:22.933346 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-6r9fx/must-gather-6x62j" podUID="f111f77a-70e9-4d6f-93cf-672cf0d7bd7f" containerName="copy" containerID="cri-o://f4f1e82121dffd27ad0aef02e1fd254be171b87d8d37efc6eadac9b6f5150f6a" gracePeriod=2 Feb 19 10:55:22 crc kubenswrapper[4965]: I0219 10:55:22.977270 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6r9fx/must-gather-6x62j"] Feb 19 10:55:24 crc kubenswrapper[4965]: I0219 10:55:24.022757 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6r9fx_must-gather-6x62j_f111f77a-70e9-4d6f-93cf-672cf0d7bd7f/copy/0.log" Feb 19 10:55:24 crc kubenswrapper[4965]: I0219 10:55:24.023576 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6r9fx/must-gather-6x62j" Feb 19 10:55:24 crc kubenswrapper[4965]: I0219 10:55:24.168351 4965 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6r9fx_must-gather-6x62j_f111f77a-70e9-4d6f-93cf-672cf0d7bd7f/copy/0.log" Feb 19 10:55:24 crc kubenswrapper[4965]: I0219 10:55:24.168665 4965 generic.go:334] "Generic (PLEG): container finished" podID="f111f77a-70e9-4d6f-93cf-672cf0d7bd7f" containerID="f4f1e82121dffd27ad0aef02e1fd254be171b87d8d37efc6eadac9b6f5150f6a" exitCode=143 Feb 19 10:55:24 crc kubenswrapper[4965]: I0219 10:55:24.168725 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6r9fx/must-gather-6x62j" Feb 19 10:55:24 crc kubenswrapper[4965]: I0219 10:55:24.168755 4965 scope.go:117] "RemoveContainer" containerID="f4f1e82121dffd27ad0aef02e1fd254be171b87d8d37efc6eadac9b6f5150f6a" Feb 19 10:55:24 crc kubenswrapper[4965]: I0219 10:55:24.195818 4965 scope.go:117] "RemoveContainer" containerID="32067aef8af394a29248c5c4c4d72c68fb56494c5f38be04157a99f572407114" Feb 19 10:55:24 crc kubenswrapper[4965]: I0219 10:55:24.201104 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f111f77a-70e9-4d6f-93cf-672cf0d7bd7f-must-gather-output\") pod \"f111f77a-70e9-4d6f-93cf-672cf0d7bd7f\" (UID: \"f111f77a-70e9-4d6f-93cf-672cf0d7bd7f\") " Feb 19 10:55:24 crc kubenswrapper[4965]: I0219 10:55:24.201225 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2knbn\" (UniqueName: \"kubernetes.io/projected/f111f77a-70e9-4d6f-93cf-672cf0d7bd7f-kube-api-access-2knbn\") pod \"f111f77a-70e9-4d6f-93cf-672cf0d7bd7f\" (UID: \"f111f77a-70e9-4d6f-93cf-672cf0d7bd7f\") " Feb 19 10:55:24 crc kubenswrapper[4965]: I0219 10:55:24.213628 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f111f77a-70e9-4d6f-93cf-672cf0d7bd7f-kube-api-access-2knbn" (OuterVolumeSpecName: "kube-api-access-2knbn") pod "f111f77a-70e9-4d6f-93cf-672cf0d7bd7f" (UID: "f111f77a-70e9-4d6f-93cf-672cf0d7bd7f"). InnerVolumeSpecName "kube-api-access-2knbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:55:24 crc kubenswrapper[4965]: I0219 10:55:24.276636 4965 scope.go:117] "RemoveContainer" containerID="f4f1e82121dffd27ad0aef02e1fd254be171b87d8d37efc6eadac9b6f5150f6a" Feb 19 10:55:24 crc kubenswrapper[4965]: E0219 10:55:24.287272 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4f1e82121dffd27ad0aef02e1fd254be171b87d8d37efc6eadac9b6f5150f6a\": container with ID starting with f4f1e82121dffd27ad0aef02e1fd254be171b87d8d37efc6eadac9b6f5150f6a not found: ID does not exist" containerID="f4f1e82121dffd27ad0aef02e1fd254be171b87d8d37efc6eadac9b6f5150f6a" Feb 19 10:55:24 crc kubenswrapper[4965]: I0219 10:55:24.287326 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4f1e82121dffd27ad0aef02e1fd254be171b87d8d37efc6eadac9b6f5150f6a"} err="failed to get container status \"f4f1e82121dffd27ad0aef02e1fd254be171b87d8d37efc6eadac9b6f5150f6a\": rpc error: code = NotFound desc = could not find container \"f4f1e82121dffd27ad0aef02e1fd254be171b87d8d37efc6eadac9b6f5150f6a\": container with ID starting with f4f1e82121dffd27ad0aef02e1fd254be171b87d8d37efc6eadac9b6f5150f6a not found: ID does not exist" Feb 19 10:55:24 crc kubenswrapper[4965]: I0219 10:55:24.287358 4965 scope.go:117] "RemoveContainer" containerID="32067aef8af394a29248c5c4c4d72c68fb56494c5f38be04157a99f572407114" Feb 19 10:55:24 crc kubenswrapper[4965]: E0219 10:55:24.287663 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32067aef8af394a29248c5c4c4d72c68fb56494c5f38be04157a99f572407114\": container with ID starting with 32067aef8af394a29248c5c4c4d72c68fb56494c5f38be04157a99f572407114 not found: ID does not exist" containerID="32067aef8af394a29248c5c4c4d72c68fb56494c5f38be04157a99f572407114" Feb 19 10:55:24 crc kubenswrapper[4965]: I0219 10:55:24.287700 
4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32067aef8af394a29248c5c4c4d72c68fb56494c5f38be04157a99f572407114"} err="failed to get container status \"32067aef8af394a29248c5c4c4d72c68fb56494c5f38be04157a99f572407114\": rpc error: code = NotFound desc = could not find container \"32067aef8af394a29248c5c4c4d72c68fb56494c5f38be04157a99f572407114\": container with ID starting with 32067aef8af394a29248c5c4c4d72c68fb56494c5f38be04157a99f572407114 not found: ID does not exist" Feb 19 10:55:24 crc kubenswrapper[4965]: I0219 10:55:24.306398 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2knbn\" (UniqueName: \"kubernetes.io/projected/f111f77a-70e9-4d6f-93cf-672cf0d7bd7f-kube-api-access-2knbn\") on node \"crc\" DevicePath \"\"" Feb 19 10:55:24 crc kubenswrapper[4965]: I0219 10:55:24.421040 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f111f77a-70e9-4d6f-93cf-672cf0d7bd7f-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f111f77a-70e9-4d6f-93cf-672cf0d7bd7f" (UID: "f111f77a-70e9-4d6f-93cf-672cf0d7bd7f"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:55:24 crc kubenswrapper[4965]: I0219 10:55:24.510559 4965 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f111f77a-70e9-4d6f-93cf-672cf0d7bd7f-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 19 10:55:25 crc kubenswrapper[4965]: I0219 10:55:25.222898 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f111f77a-70e9-4d6f-93cf-672cf0d7bd7f" path="/var/lib/kubelet/pods/f111f77a-70e9-4d6f-93cf-672cf0d7bd7f/volumes" Feb 19 10:55:25 crc kubenswrapper[4965]: I0219 10:55:25.225918 4965 scope.go:117] "RemoveContainer" containerID="045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17" Feb 19 10:55:25 crc kubenswrapper[4965]: E0219 10:55:25.226226 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:55:37 crc kubenswrapper[4965]: I0219 10:55:37.198430 4965 scope.go:117] "RemoveContainer" containerID="045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17" Feb 19 10:55:37 crc kubenswrapper[4965]: E0219 10:55:37.199375 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:55:48 crc kubenswrapper[4965]: I0219 10:55:48.198594 4965 
scope.go:117] "RemoveContainer" containerID="045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17" Feb 19 10:55:48 crc kubenswrapper[4965]: E0219 10:55:48.199335 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:55:59 crc kubenswrapper[4965]: I0219 10:55:59.198309 4965 scope.go:117] "RemoveContainer" containerID="045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17" Feb 19 10:55:59 crc kubenswrapper[4965]: E0219 10:55:59.199146 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:56:13 crc kubenswrapper[4965]: I0219 10:56:13.198234 4965 scope.go:117] "RemoveContainer" containerID="045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17" Feb 19 10:56:13 crc kubenswrapper[4965]: E0219 10:56:13.199100 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:56:25 crc kubenswrapper[4965]: I0219 
10:56:25.207789 4965 scope.go:117] "RemoveContainer" containerID="045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17" Feb 19 10:56:25 crc kubenswrapper[4965]: E0219 10:56:25.208570 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:56:40 crc kubenswrapper[4965]: I0219 10:56:40.199690 4965 scope.go:117] "RemoveContainer" containerID="045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17" Feb 19 10:56:40 crc kubenswrapper[4965]: E0219 10:56:40.200809 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:56:51 crc kubenswrapper[4965]: I0219 10:56:51.298518 4965 scope.go:117] "RemoveContainer" containerID="045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17" Feb 19 10:56:51 crc kubenswrapper[4965]: E0219 10:56:51.315641 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:57:03 crc 
kubenswrapper[4965]: I0219 10:57:03.197962 4965 scope.go:117] "RemoveContainer" containerID="045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17" Feb 19 10:57:03 crc kubenswrapper[4965]: E0219 10:57:03.199004 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:57:16 crc kubenswrapper[4965]: I0219 10:57:16.197924 4965 scope.go:117] "RemoveContainer" containerID="045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17" Feb 19 10:57:16 crc kubenswrapper[4965]: E0219 10:57:16.198747 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:57:30 crc kubenswrapper[4965]: I0219 10:57:30.198476 4965 scope.go:117] "RemoveContainer" containerID="045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17" Feb 19 10:57:30 crc kubenswrapper[4965]: E0219 10:57:30.199375 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 
19 10:57:32 crc kubenswrapper[4965]: I0219 10:57:32.704695 4965 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-86m2v"] Feb 19 10:57:32 crc kubenswrapper[4965]: E0219 10:57:32.705376 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac58895-c2f0-4231-b9af-69b821f10453" containerName="extract-utilities" Feb 19 10:57:32 crc kubenswrapper[4965]: I0219 10:57:32.705391 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac58895-c2f0-4231-b9af-69b821f10453" containerName="extract-utilities" Feb 19 10:57:32 crc kubenswrapper[4965]: E0219 10:57:32.705405 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f111f77a-70e9-4d6f-93cf-672cf0d7bd7f" containerName="gather" Feb 19 10:57:32 crc kubenswrapper[4965]: I0219 10:57:32.705410 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="f111f77a-70e9-4d6f-93cf-672cf0d7bd7f" containerName="gather" Feb 19 10:57:32 crc kubenswrapper[4965]: E0219 10:57:32.705420 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac58895-c2f0-4231-b9af-69b821f10453" containerName="extract-content" Feb 19 10:57:32 crc kubenswrapper[4965]: I0219 10:57:32.705425 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac58895-c2f0-4231-b9af-69b821f10453" containerName="extract-content" Feb 19 10:57:32 crc kubenswrapper[4965]: E0219 10:57:32.705444 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f111f77a-70e9-4d6f-93cf-672cf0d7bd7f" containerName="copy" Feb 19 10:57:32 crc kubenswrapper[4965]: I0219 10:57:32.705450 4965 state_mem.go:107] "Deleted CPUSet assignment" podUID="f111f77a-70e9-4d6f-93cf-672cf0d7bd7f" containerName="copy" Feb 19 10:57:32 crc kubenswrapper[4965]: E0219 10:57:32.705468 4965 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac58895-c2f0-4231-b9af-69b821f10453" containerName="registry-server" Feb 19 10:57:32 crc kubenswrapper[4965]: I0219 10:57:32.705473 4965 
state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac58895-c2f0-4231-b9af-69b821f10453" containerName="registry-server" Feb 19 10:57:32 crc kubenswrapper[4965]: I0219 10:57:32.705670 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="f111f77a-70e9-4d6f-93cf-672cf0d7bd7f" containerName="gather" Feb 19 10:57:32 crc kubenswrapper[4965]: I0219 10:57:32.705682 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ac58895-c2f0-4231-b9af-69b821f10453" containerName="registry-server" Feb 19 10:57:32 crc kubenswrapper[4965]: I0219 10:57:32.705695 4965 memory_manager.go:354] "RemoveStaleState removing state" podUID="f111f77a-70e9-4d6f-93cf-672cf0d7bd7f" containerName="copy" Feb 19 10:57:32 crc kubenswrapper[4965]: I0219 10:57:32.707136 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-86m2v" Feb 19 10:57:32 crc kubenswrapper[4965]: I0219 10:57:32.736640 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-86m2v"] Feb 19 10:57:32 crc kubenswrapper[4965]: I0219 10:57:32.848321 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea-utilities\") pod \"redhat-operators-86m2v\" (UID: \"2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea\") " pod="openshift-marketplace/redhat-operators-86m2v" Feb 19 10:57:32 crc kubenswrapper[4965]: I0219 10:57:32.848685 4965 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea-catalog-content\") pod \"redhat-operators-86m2v\" (UID: \"2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea\") " pod="openshift-marketplace/redhat-operators-86m2v" Feb 19 10:57:32 crc kubenswrapper[4965]: I0219 10:57:32.848814 4965 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzgqb\" (UniqueName: \"kubernetes.io/projected/2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea-kube-api-access-gzgqb\") pod \"redhat-operators-86m2v\" (UID: \"2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea\") " pod="openshift-marketplace/redhat-operators-86m2v" Feb 19 10:57:32 crc kubenswrapper[4965]: I0219 10:57:32.951320 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea-utilities\") pod \"redhat-operators-86m2v\" (UID: \"2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea\") " pod="openshift-marketplace/redhat-operators-86m2v" Feb 19 10:57:32 crc kubenswrapper[4965]: I0219 10:57:32.951665 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea-catalog-content\") pod \"redhat-operators-86m2v\" (UID: \"2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea\") " pod="openshift-marketplace/redhat-operators-86m2v" Feb 19 10:57:32 crc kubenswrapper[4965]: I0219 10:57:32.951785 4965 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzgqb\" (UniqueName: \"kubernetes.io/projected/2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea-kube-api-access-gzgqb\") pod \"redhat-operators-86m2v\" (UID: \"2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea\") " pod="openshift-marketplace/redhat-operators-86m2v" Feb 19 10:57:32 crc kubenswrapper[4965]: I0219 10:57:32.951972 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea-utilities\") pod \"redhat-operators-86m2v\" (UID: \"2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea\") " pod="openshift-marketplace/redhat-operators-86m2v" Feb 19 10:57:32 crc kubenswrapper[4965]: I0219 10:57:32.952248 4965 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea-catalog-content\") pod \"redhat-operators-86m2v\" (UID: \"2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea\") " pod="openshift-marketplace/redhat-operators-86m2v" Feb 19 10:57:32 crc kubenswrapper[4965]: I0219 10:57:32.984920 4965 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzgqb\" (UniqueName: \"kubernetes.io/projected/2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea-kube-api-access-gzgqb\") pod \"redhat-operators-86m2v\" (UID: \"2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea\") " pod="openshift-marketplace/redhat-operators-86m2v" Feb 19 10:57:33 crc kubenswrapper[4965]: I0219 10:57:33.024251 4965 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-86m2v" Feb 19 10:57:33 crc kubenswrapper[4965]: I0219 10:57:33.540977 4965 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-86m2v"] Feb 19 10:57:34 crc kubenswrapper[4965]: I0219 10:57:34.534129 4965 generic.go:334] "Generic (PLEG): container finished" podID="2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea" containerID="231117807f3cf040223d633e4bba89939eb326015b1b278edb6bb008186cd3e7" exitCode=0 Feb 19 10:57:34 crc kubenswrapper[4965]: I0219 10:57:34.534251 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-86m2v" event={"ID":"2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea","Type":"ContainerDied","Data":"231117807f3cf040223d633e4bba89939eb326015b1b278edb6bb008186cd3e7"} Feb 19 10:57:34 crc kubenswrapper[4965]: I0219 10:57:34.534448 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-86m2v" event={"ID":"2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea","Type":"ContainerStarted","Data":"f1586594cf69c2b74530e090b70a9722d4a23ac810a4035800806d09f2e8c26b"} Feb 19 10:57:36 crc kubenswrapper[4965]: I0219 10:57:36.562231 4965 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-86m2v" event={"ID":"2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea","Type":"ContainerStarted","Data":"42bfaeaf24fde4a5d0b7affc2294e8a4c81812eb5921d2765840d469c116a08b"} Feb 19 10:57:40 crc kubenswrapper[4965]: I0219 10:57:40.602640 4965 generic.go:334] "Generic (PLEG): container finished" podID="2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea" containerID="42bfaeaf24fde4a5d0b7affc2294e8a4c81812eb5921d2765840d469c116a08b" exitCode=0 Feb 19 10:57:40 crc kubenswrapper[4965]: I0219 10:57:40.602714 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-86m2v" event={"ID":"2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea","Type":"ContainerDied","Data":"42bfaeaf24fde4a5d0b7affc2294e8a4c81812eb5921d2765840d469c116a08b"} Feb 19 10:57:42 crc kubenswrapper[4965]: I0219 10:57:42.627829 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-86m2v" event={"ID":"2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea","Type":"ContainerStarted","Data":"53ec3bba9d42a3e19a9302e107551bb0e397a1636050f0104f55a39bbeba2445"} Feb 19 10:57:42 crc kubenswrapper[4965]: I0219 10:57:42.652889 4965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-86m2v" podStartSLOduration=4.137416458 podStartE2EDuration="10.652837345s" podCreationTimestamp="2026-02-19 10:57:32 +0000 UTC" firstStartedPulling="2026-02-19 10:57:34.536871298 +0000 UTC m=+4510.158192618" lastFinishedPulling="2026-02-19 10:57:41.052292195 +0000 UTC m=+4516.673613505" observedRunningTime="2026-02-19 10:57:42.642622107 +0000 UTC m=+4518.263943447" watchObservedRunningTime="2026-02-19 10:57:42.652837345 +0000 UTC m=+4518.274158675" Feb 19 10:57:43 crc kubenswrapper[4965]: I0219 10:57:43.025411 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-86m2v" Feb 19 10:57:43 crc kubenswrapper[4965]: I0219 
10:57:43.025498 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-86m2v" Feb 19 10:57:43 crc kubenswrapper[4965]: I0219 10:57:43.197875 4965 scope.go:117] "RemoveContainer" containerID="045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17" Feb 19 10:57:43 crc kubenswrapper[4965]: E0219 10:57:43.198232 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:57:44 crc kubenswrapper[4965]: I0219 10:57:44.078550 4965 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-86m2v" podUID="2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea" containerName="registry-server" probeResult="failure" output=< Feb 19 10:57:44 crc kubenswrapper[4965]: timeout: failed to connect service ":50051" within 1s Feb 19 10:57:44 crc kubenswrapper[4965]: > Feb 19 10:57:53 crc kubenswrapper[4965]: I0219 10:57:53.075910 4965 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-86m2v" Feb 19 10:57:53 crc kubenswrapper[4965]: I0219 10:57:53.135914 4965 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-86m2v" Feb 19 10:57:53 crc kubenswrapper[4965]: I0219 10:57:53.318799 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-86m2v"] Feb 19 10:57:54 crc kubenswrapper[4965]: I0219 10:57:54.737997 4965 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-86m2v" 
podUID="2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea" containerName="registry-server" containerID="cri-o://53ec3bba9d42a3e19a9302e107551bb0e397a1636050f0104f55a39bbeba2445" gracePeriod=2 Feb 19 10:57:55 crc kubenswrapper[4965]: I0219 10:57:55.575099 4965 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-86m2v" Feb 19 10:57:55 crc kubenswrapper[4965]: I0219 10:57:55.644492 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea-utilities\") pod \"2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea\" (UID: \"2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea\") " Feb 19 10:57:55 crc kubenswrapper[4965]: I0219 10:57:55.644680 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea-catalog-content\") pod \"2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea\" (UID: \"2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea\") " Feb 19 10:57:55 crc kubenswrapper[4965]: I0219 10:57:55.647378 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea-utilities" (OuterVolumeSpecName: "utilities") pod "2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea" (UID: "2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:57:55 crc kubenswrapper[4965]: I0219 10:57:55.746548 4965 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzgqb\" (UniqueName: \"kubernetes.io/projected/2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea-kube-api-access-gzgqb\") pod \"2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea\" (UID: \"2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea\") " Feb 19 10:57:55 crc kubenswrapper[4965]: I0219 10:57:55.747079 4965 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:57:55 crc kubenswrapper[4965]: I0219 10:57:55.754302 4965 generic.go:334] "Generic (PLEG): container finished" podID="2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea" containerID="53ec3bba9d42a3e19a9302e107551bb0e397a1636050f0104f55a39bbeba2445" exitCode=0 Feb 19 10:57:55 crc kubenswrapper[4965]: I0219 10:57:55.754348 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-86m2v" event={"ID":"2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea","Type":"ContainerDied","Data":"53ec3bba9d42a3e19a9302e107551bb0e397a1636050f0104f55a39bbeba2445"} Feb 19 10:57:55 crc kubenswrapper[4965]: I0219 10:57:55.754375 4965 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-86m2v" event={"ID":"2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea","Type":"ContainerDied","Data":"f1586594cf69c2b74530e090b70a9722d4a23ac810a4035800806d09f2e8c26b"} Feb 19 10:57:55 crc kubenswrapper[4965]: I0219 10:57:55.754392 4965 scope.go:117] "RemoveContainer" containerID="53ec3bba9d42a3e19a9302e107551bb0e397a1636050f0104f55a39bbeba2445" Feb 19 10:57:55 crc kubenswrapper[4965]: I0219 10:57:55.754406 4965 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-86m2v" Feb 19 10:57:55 crc kubenswrapper[4965]: I0219 10:57:55.757178 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea-kube-api-access-gzgqb" (OuterVolumeSpecName: "kube-api-access-gzgqb") pod "2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea" (UID: "2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea"). InnerVolumeSpecName "kube-api-access-gzgqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:57:55 crc kubenswrapper[4965]: I0219 10:57:55.839603 4965 scope.go:117] "RemoveContainer" containerID="42bfaeaf24fde4a5d0b7affc2294e8a4c81812eb5921d2765840d469c116a08b" Feb 19 10:57:55 crc kubenswrapper[4965]: I0219 10:57:55.840128 4965 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea" (UID: "2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:57:55 crc kubenswrapper[4965]: I0219 10:57:55.848503 4965 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzgqb\" (UniqueName: \"kubernetes.io/projected/2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea-kube-api-access-gzgqb\") on node \"crc\" DevicePath \"\"" Feb 19 10:57:55 crc kubenswrapper[4965]: I0219 10:57:55.848534 4965 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:57:55 crc kubenswrapper[4965]: I0219 10:57:55.865453 4965 scope.go:117] "RemoveContainer" containerID="231117807f3cf040223d633e4bba89939eb326015b1b278edb6bb008186cd3e7" Feb 19 10:57:55 crc kubenswrapper[4965]: I0219 10:57:55.907596 4965 scope.go:117] "RemoveContainer" containerID="53ec3bba9d42a3e19a9302e107551bb0e397a1636050f0104f55a39bbeba2445" Feb 19 10:57:55 crc kubenswrapper[4965]: E0219 10:57:55.908106 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53ec3bba9d42a3e19a9302e107551bb0e397a1636050f0104f55a39bbeba2445\": container with ID starting with 53ec3bba9d42a3e19a9302e107551bb0e397a1636050f0104f55a39bbeba2445 not found: ID does not exist" containerID="53ec3bba9d42a3e19a9302e107551bb0e397a1636050f0104f55a39bbeba2445" Feb 19 10:57:55 crc kubenswrapper[4965]: I0219 10:57:55.908137 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53ec3bba9d42a3e19a9302e107551bb0e397a1636050f0104f55a39bbeba2445"} err="failed to get container status \"53ec3bba9d42a3e19a9302e107551bb0e397a1636050f0104f55a39bbeba2445\": rpc error: code = NotFound desc = could not find container \"53ec3bba9d42a3e19a9302e107551bb0e397a1636050f0104f55a39bbeba2445\": container with ID starting with 53ec3bba9d42a3e19a9302e107551bb0e397a1636050f0104f55a39bbeba2445 not 
found: ID does not exist" Feb 19 10:57:55 crc kubenswrapper[4965]: I0219 10:57:55.908156 4965 scope.go:117] "RemoveContainer" containerID="42bfaeaf24fde4a5d0b7affc2294e8a4c81812eb5921d2765840d469c116a08b" Feb 19 10:57:55 crc kubenswrapper[4965]: E0219 10:57:55.909284 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42bfaeaf24fde4a5d0b7affc2294e8a4c81812eb5921d2765840d469c116a08b\": container with ID starting with 42bfaeaf24fde4a5d0b7affc2294e8a4c81812eb5921d2765840d469c116a08b not found: ID does not exist" containerID="42bfaeaf24fde4a5d0b7affc2294e8a4c81812eb5921d2765840d469c116a08b" Feb 19 10:57:55 crc kubenswrapper[4965]: I0219 10:57:55.909305 4965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42bfaeaf24fde4a5d0b7affc2294e8a4c81812eb5921d2765840d469c116a08b"} err="failed to get container status \"42bfaeaf24fde4a5d0b7affc2294e8a4c81812eb5921d2765840d469c116a08b\": rpc error: code = NotFound desc = could not find container \"42bfaeaf24fde4a5d0b7affc2294e8a4c81812eb5921d2765840d469c116a08b\": container with ID starting with 42bfaeaf24fde4a5d0b7affc2294e8a4c81812eb5921d2765840d469c116a08b not found: ID does not exist" Feb 19 10:57:55 crc kubenswrapper[4965]: I0219 10:57:55.909318 4965 scope.go:117] "RemoveContainer" containerID="231117807f3cf040223d633e4bba89939eb326015b1b278edb6bb008186cd3e7" Feb 19 10:57:55 crc kubenswrapper[4965]: E0219 10:57:55.909869 4965 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"231117807f3cf040223d633e4bba89939eb326015b1b278edb6bb008186cd3e7\": container with ID starting with 231117807f3cf040223d633e4bba89939eb326015b1b278edb6bb008186cd3e7 not found: ID does not exist" containerID="231117807f3cf040223d633e4bba89939eb326015b1b278edb6bb008186cd3e7" Feb 19 10:57:55 crc kubenswrapper[4965]: I0219 10:57:55.909899 4965 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"231117807f3cf040223d633e4bba89939eb326015b1b278edb6bb008186cd3e7"} err="failed to get container status \"231117807f3cf040223d633e4bba89939eb326015b1b278edb6bb008186cd3e7\": rpc error: code = NotFound desc = could not find container \"231117807f3cf040223d633e4bba89939eb326015b1b278edb6bb008186cd3e7\": container with ID starting with 231117807f3cf040223d633e4bba89939eb326015b1b278edb6bb008186cd3e7 not found: ID does not exist" Feb 19 10:57:56 crc kubenswrapper[4965]: I0219 10:57:56.099634 4965 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-86m2v"] Feb 19 10:57:56 crc kubenswrapper[4965]: I0219 10:57:56.110857 4965 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-86m2v"] Feb 19 10:57:57 crc kubenswrapper[4965]: I0219 10:57:57.218705 4965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea" path="/var/lib/kubelet/pods/2fae8f2b-a4f0-4b26-a69a-391dfa9f86ea/volumes" Feb 19 10:57:58 crc kubenswrapper[4965]: I0219 10:57:58.197887 4965 scope.go:117] "RemoveContainer" containerID="045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17" Feb 19 10:57:58 crc kubenswrapper[4965]: E0219 10:57:58.198648 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:58:10 crc kubenswrapper[4965]: I0219 10:58:10.198475 4965 scope.go:117] "RemoveContainer" containerID="045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17" Feb 19 10:58:10 crc 
kubenswrapper[4965]: E0219 10:58:10.199142 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:58:22 crc kubenswrapper[4965]: I0219 10:58:22.198657 4965 scope.go:117] "RemoveContainer" containerID="045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17" Feb 19 10:58:22 crc kubenswrapper[4965]: E0219 10:58:22.199579 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:58:35 crc kubenswrapper[4965]: I0219 10:58:35.204167 4965 scope.go:117] "RemoveContainer" containerID="045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17" Feb 19 10:58:35 crc kubenswrapper[4965]: E0219 10:58:35.205081 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:58:50 crc kubenswrapper[4965]: I0219 10:58:50.198120 4965 scope.go:117] "RemoveContainer" containerID="045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17" Feb 
19 10:58:50 crc kubenswrapper[4965]: E0219 10:58:50.198963 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83" Feb 19 10:59:03 crc kubenswrapper[4965]: I0219 10:59:03.201844 4965 scope.go:117] "RemoveContainer" containerID="045e0b9aa6772b3a1d25435f13900d7065d2499b39f0e927eaefa0a25d09ea17" Feb 19 10:59:03 crc kubenswrapper[4965]: E0219 10:59:03.202621 4965 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7mhh9_openshift-machine-config-operator(63ef3eb8-6103-492d-b6ef-f16081d15e83)\"" pod="openshift-machine-config-operator/machine-config-daemon-7mhh9" podUID="63ef3eb8-6103-492d-b6ef-f16081d15e83"